WO2022183321A1 - Image detection method, apparatus, and electronic device - Google Patents

Image detection method, apparatus, and electronic device

Info

Publication number
WO2022183321A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
algorithm
image data
detection
Prior art date
Application number
PCT/CN2021/078478
Other languages
French (fr)
Chinese (zh)
Inventor
邱珏沁
柳海波
闻明
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2021/078478
Priority to CN202180093086.5A
Publication of WO2022183321A1

Classifications

    • G06T5/60
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • the embodiments of the present application relate to the technical field of artificial intelligence, and in particular, to an image detection method, apparatus, and electronic device.
  • AI (Artificial Intelligence)
  • machine learning methods are usually used to construct initial models of various structures, such as neural network models, support vector machine models, and decision tree models; the initial model is then trained on training samples to realize functions such as image detection and speech recognition.
  • the neural network is usually trained to obtain a perception model to realize image detection tasks such as scene recognition, object detection or image segmentation.
  • the sample image data used for training the neural network and the image data to be detected are usually collected by different camera modules. Due to the significant differences in the manufacturing process, photoelectric response function and noise level of different camera modules, there is a large deviation between the detection results of the perception model and the real results in the image detection process. Therefore, when a new camera module is combined with an already trained perception model, how to efficiently improve the accuracy of the detection result of the perception model is a problem that needs to be solved.
  • the image detection method, device and electronic device provided by the present application can improve the accuracy of image detection model inference when a new photographing device is combined with an already trained image detection model.
  • an embodiment of the present application provides an image detection method. The image detection method includes: collecting image data to be detected by a first camera device; processing the image data to be detected by using an image processing algorithm to generate a processed image; and inputting the processed image into an image detection model to obtain a detection result; wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of the first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
  • the parameters of the image processing algorithms used to execute multiple image processing processes are adjusted based on the image detection model, so that the style of the image obtained after image processing is performed on an image collected by the first camera device is consistent with the style of the sample image data used to train the image detection model, thereby reducing the difference in high-dimensional feature distribution between the image data collected by the camera device and the sample image data used to train the image detection model.
  • the image detection model is obtained by performing neural network training on the second sample image data collected by the second camera device.
  • the parameters of the image processing algorithm are determined by the following steps: comparing the detection result with the annotation information of the first sample image data to obtain a comparison result; iteratively adjusting the parameters of the image processing algorithm based on the error between the detection result and the annotation information of the sample image data; and saving the parameters of the image processing algorithm when a preset condition is satisfied.
  • the preset conditions here may include but are not limited to: the error is less than or equal to a preset threshold, or the number of iterations is greater than or equal to a preset threshold.
  • in a case where the comparison result is an error, iteratively adjusting the parameters of the image processing algorithm based on the comparison result includes: constructing a target loss function based on the error between the detection result and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and iteratively adjusting the parameters of the image processing algorithm based on the target loss function by using a back-propagation algorithm and a gradient descent algorithm.
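The loop just described (freeze the detection model, run a processed sample through it, compare with the annotation, and adjust the processing parameters by gradient descent) can be sketched as follows. This is a toy illustration with a one-parameter "algorithm" and a linear stand-in for the detector; none of the names or values come from the application.

```python
import numpy as np

# Toy sketch of the claimed tuning loop: the image detection model is
# frozen, and only the image-processing parameter (a single gain here)
# is adjusted by gradient descent on the loss between the model's
# detection result and the annotation.

sample = np.linspace(0.1, 0.9, 16)     # first sample image data (toy)
annotation = 2.0 * sample.mean()       # annotation information (label)

def detect(img):                       # frozen image detection model
    return 2.0 * img.mean()            # its weights are never updated

gain = 0.2                             # image-processing parameter to adjust
lr = 0.25
for _ in range(200):
    pred = detect(gain * sample)       # process, then detect
    err = pred - annotation            # comparison result (error)
    # d(err^2)/d(gain), back-propagated through the frozen detector
    grad = 2.0 * err * 2.0 * sample.mean()
    gain -= lr * grad                  # gradient-descent update
```

With this toy setup the gain converges to 1.0, the value at which the frozen detector reproduces the annotation; the point is that only the processing parameter moves, never the detector.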
  • the processing of the image data and the first sample image data includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.
  • the image processing algorithm is executed by an image signal processor; and the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
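The last two parameters listed, the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter, are the two knobs of a bilateral noise-reduction filter. A plain-NumPy sketch of such a filter (an illustration, not an optimized ISP kernel or code from the application) shows where each parameter enters:

```python
import numpy as np

def bilateral_nr(img, sigma_spatial=2.0, sigma_range=0.1, radius=2):
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # spatial-domain Gaussian: weight by distance to the centre pixel
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_spatial**2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # pixel-value-domain Gaussian: weight by intensity difference
            w_range = np.exp(-(win - img[i, j])**2 / (2 * sigma_range**2))
            weights = w_spatial * w_range
            out[i, j] = (weights * win).sum() / weights.sum()
    return out
```

A larger sigma_spatial smooths over a wider neighbourhood, while a smaller sigma_range preserves edges by down-weighting pixels whose values differ from the centre, which is why both are worth tuning per camera module.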
  • the image processing algorithm is executed by a trained image processing model; the parameters of the image processing algorithm further include: the weight factors of the neural network used to generate the image processing model.
  • the annotation information of the first sample image data is manually annotated; and the method further includes: converting the first sample image data into color images suitable for human annotation.
  • the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
  • an embodiment of the present application provides a parameter adjustment method for image processing.
  • the parameter adjustment method for image processing includes: performing image processing on first sample image data by using an image processing algorithm to generate first image data, wherein the first sample image data is collected by a first camera; inputting the first image data into a pre-trained image detection model to obtain a detection result; comparing the detection result with the annotation information of the first sample image data to obtain a comparison result; and adjusting the parameters of the image processing algorithm based on the comparison result.
  • the image detection model is obtained by performing neural network training on the second sample image data collected by the second camera device.
  • in a case where the comparison result is an error, iteratively adjusting the parameters of the image processing algorithm based on the comparison result includes: constructing a target loss function based on the error between the detection result and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and iteratively adjusting the parameters of the image processing algorithm based on the target loss function by using a back-propagation algorithm and a gradient descent algorithm.
  • the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.
  • the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
  • the image processing algorithm is executed by a trained image processing model; the parameters of the image processing algorithm include: the weight factors of the neural network used to generate the image processing model.
  • the labeling information of the first sample image data is manually labeled; and the method further includes: converting the first sample image data into color images suitable for human annotation.
  • the image detection model is used to perform at least one of the following detection tasks: labeling a detection frame, identifying a target object, predicting a confidence level, and predicting a motion trajectory of the target object.
  • an embodiment of the present application provides an image detection device. The image detection device includes: a collection module configured to collect image data to be detected through a first camera device; a processing module configured to process the image data to be detected by using an image processing algorithm to generate a processed image; and a detection module configured to input the processed image into an image detection model to obtain a detection result; wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of the first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
  • the image detection model is obtained by performing neural network training on the second sample image data collected by the second camera device.
  • the parameters of the image processing algorithm are determined by a parameter adjustment module. The parameter adjustment module includes: a comparison sub-module configured to compare the detection result of the first sample image data with the annotation information of the first sample image data to obtain a comparison result; an adjustment sub-module configured to iteratively adjust the parameters of the image processing algorithm based on the comparison result; and a saving sub-module configured to save the parameters of the image processing algorithm when a preset condition is satisfied.
  • in a case where the comparison result is an error, the adjustment sub-module is further configured to: construct a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be updated in the image processing algorithm; and iteratively update the parameters of the image processing algorithm based on the target loss function by using a back-propagation algorithm and a gradient descent algorithm.
  • the image processing algorithm includes at least one of the following image processing procedures: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, and image noise reduction.
  • the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
  • the image processing algorithm is executed by a trained image processing model; the parameters of the image processing algorithm include: the weight factors of the neural network used to generate the image processing model.
  • the annotation information of the first sample image data is manually annotated; and the method further includes: converting the first sample image data into color images suitable for human annotation.
  • the image detection model is used to perform at least one of the following detection tasks: labeling a detection frame, identifying a target object, predicting a confidence level, and predicting a motion trajectory of the target object.
  • an embodiment of the present application provides an electronic device. The electronic device includes: a first camera device configured to collect image data to be detected; an image signal processor configured to process the image data to be detected by using an image processing algorithm to generate a processed image; and an artificial intelligence processor configured to input the processed image into an image detection model to obtain a detection result; wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of the first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
  • an embodiment of the present application provides an image detection device. The image detection device includes one or more processors and a memory; the memory is coupled to the processor and is used to store one or more programs; the one or more processors are configured to run the one or more programs to implement the method according to the first aspect.
  • an embodiment of the present application provides a parameter adjustment apparatus for image processing. The parameter adjustment apparatus includes one or more processors and a memory; the memory is coupled to the processor and is used to store one or more programs; the one or more processors are used to execute the one or more programs to implement the method according to the second aspect.
  • embodiments of the present application provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; when the computer program is executed by at least one processor, it is used to implement the method according to the first aspect or the second aspect.
  • an embodiment of the present application provides a computer program product, which is used to implement the method according to the first aspect or the second aspect when the computer program product is executed by at least one processor.
  • FIG. 1 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an image processing process performed in combination with an ISP and an AI processor provided by an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a vehicle provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a system architecture including an electronic device for parameter debugging of an image processing algorithm provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of a parameter debugging method of an image processing algorithm provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of an image detection method provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a parameter debugging apparatus provided by an embodiment of the present application.
  • the corresponding apparatus may include one or more units, such as functional units, to perform one or more of the described method steps (e.g., one unit performing one or more of the steps, or multiple units each performing one or more of the steps), even if such unit or units are not explicitly described or illustrated in the figures.
  • the corresponding method may contain a step to perform the functionality of the one or more units (e.g., one step performing the functionality of the one or more units, or multiple steps each performing the functionality of one or more of the units), even if such one or more steps are not explicitly described or illustrated in the figures.
  • the image detection method described in the present application can be applied in the field of computer vision, in a scene where an image detection model obtained by training sample images collected by other photographing equipment needs to be combined with a new photographing equipment.
  • the electronic device 100 may be a user equipment (User Equipment, UE), such as a mobile phone, a tablet computer, a smart screen, or an image capturing device.
  • UE (User Equipment)
  • the electronic device 100 may also be a vehicle.
  • a camera 101 may be provided in the electronic device 100 for capturing image data.
  • the electronic device 100 may also be, or be integrated into, a module, chip, chipset, circuit board, or component in an electronic device, and the chip, chipset, or circuit board equipped with the chip or chipset can work when driven by the necessary software.
  • the electronic device 100 includes one or more processors, such as an image signal processor (ISP, Image Signal Processor) 102 and an AI processor 103 .
  • the one or more processors can be integrated in one or more chips, and the one or more chips can be regarded as a chipset; when the one or more processors are integrated in the same chip, the chip is also called a system on a chip (SoC).
  • the electronic device 100 also includes one or more other necessary components, such as memory and the like.
  • the camera device 101 shown in FIG. 1 may be a monocular camera.
  • the camera device 101 may also include multiple cameras, and these cameras may be physically combined in one camera device or physically separated into multiple camera devices. A multi-eye camera captures multiple images at the same time, and the image to be detected can be obtained by processing these images.
  • the camera 101 may also take other forms, which are not specifically limited in the embodiments of the present application.
  • the camera 101 can collect image data in real time, or collect image data periodically, for example with a period of 3 s, 5 s, or 10 s.
  • the camera device 101 may also collect image data in other ways, which are not specifically limited in this embodiment of the present application. After the camera 101 collects the image data, it can transmit the image data to the ISP 102 .
  • the ISP 102 shown in FIG. 1 may be provided with a plurality of hardware modules or may run the necessary software programs to process the image data and communicate with the AI processor 103.
  • the ISP 102 can be used alone as a component or integrated in other digital logic devices, including but not limited to: a central processing unit (CPU), a graphics processing unit (GPU), or a digital signal processor (DSP).
  • the CPU, GPU and DSP are all processors within a system-on-chip.
  • the ISP 102 can perform multiple image processing processes, which may include but are not limited to: dark current correction, response nonlinearity correction, shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, edge enhancement, noise reduction, color correction, and more.
  • the ISP 102 executes the above-mentioned multiple image processing processes by running the image processing algorithm.
  • Each image processing process in the above-mentioned multiple image processing processes can be regarded as an independent image processing process, and thus, the image processing algorithm for executing each image processing process can be regarded as independent.
  • the ISP 102 may include multiple logic modules. For example, it includes, but is not limited to, a dark current correction module, a response nonlinearity correction module, a shading correction module, a demosaicing module, and the like.
  • Each logic module is used to perform an image processing process.
  • Each logic module may use its own specific hardware structure, and multiple logic modules may also share a set of hardware structures, which is not limited in this embodiment of the present application.
  • the one or more image processing processes are typically performed sequentially. For example, after the image data acquired by the camera 101 is provided to the ISP, processing procedures such as dark current correction, response nonlinear correction, shading correction, demosaicing, white balance correction, etc. may be sequentially performed. It should be noted that this embodiment of the present application does not limit the sequence of the image processing processes performed by the ISP. For example, white balance correction may be performed first, and then demosaicing may be performed.
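The sequential-but-reorderable execution described above can be sketched as a configurable list of stage functions. The stage bodies below are toy placeholders chosen for illustration, not real correction algorithms:

```python
import numpy as np

# Each image processing process is an independent stage; the order is
# just a configurable list, matching the note that the ISP's processing
# order is not fixed.

def dark_current_correction(img):
    return np.clip(img - 0.02, 0.0, None)    # subtract a fixed dark offset

def shading_correction(img):
    return img * 1.05                        # flat gain stand-in

def demosaic(img):
    return img                               # placeholder: input already RGB here

def white_balance_correction(img):
    return img * np.array([1.1, 1.0, 0.9])   # per-channel gains

PIPELINE = [dark_current_correction, shading_correction,
            demosaic, white_balance_correction]

def run_isp(raw, stages=PIPELINE):
    out = raw
    for stage in stages:                     # processes run sequentially
        out = stage(out)
    return out

result = run_isp(np.full((4, 4, 3), 0.5))
```

Reordering the pipeline (e.g. moving white balance before demosaicing, as the text allows) is just a matter of permuting the list passed as `stages`.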
  • the AI processor 103 shown in FIG. 1 may include a dedicated neural processor such as a neural network processor (Neural-network Processing Unit, NPU), including but not limited to a convolutional neural network processor, a tensor processor, or a neural processing engine.
  • NPU (Neural-network Processing Unit)
  • the AI processor can be used alone as a component or integrated in other digital logic devices, including but not limited to: CPU, GPU or DSP.
  • the AI processor 103 may run an image detection model, and the image detection model is obtained by training a deep neural network based on the sample image data set S1. This image detection model can perform specific detection tasks.
  • the specific detection task may include, but is not limited to, labeling of detection frames, recognition of target objects, prediction of confidence levels, prediction of motion trajectories of target objects, or image segmentation, and the like.
  • the image detection model is deployed in the AI processor 103 shown in FIG. 1 after training is completed on the offline side.
  • the offline end here can be regarded as a server device or a device for model training.
  • the AI processor 103 may also perform one or more image processing operations, and the one or more image processing operations may include, but are not limited to: demosaicing, white balance correction, tone mapping, contrast enhancement, edge enhancement, noise reduction, and color correction.
  • the AI processor 103 may also run one or more image processing models, where each image processing model is used to execute a specific image processing process.
  • the image data obtained from the camera 101 may undergo multiple image processing processes to generate a final image processing result; the AI processor 103 may perform one or more of these image processing processes (that is, one or more of the above-mentioned image processing operations), and the ISP 102 may also perform one or more of these image processing procedures.
  • the AI processor 103 and the ISP 102 may perform different image processing processes.
  • the AI processor 103 and the ISP 102 may also perform the same image processing process, such as performing further enhancement processing, which is not limited in this embodiment.
  • each image processing model may be obtained by training a neural network by using a machine learning method based on the sample image data set S3.
  • the sample image data set S3 includes: a plurality of sample image data H, and reference image data I corresponding to each sample image data H in the plurality of sample image data H.
  • the reference image data I is used for comparison; the image data compared with the reference image data I is the image data obtained after the neural network to be trained performs image processing on the sample image data H.
  • the reference image data I and its corresponding sample image data H present the same scene.
  • the same scene presented here can be understood as: the target objects presented by the reference image data I and the corresponding sample image data H and the positions of the target objects in the image are all the same.
  • when the image processing process performed by the image processing model is different, the sample image data set S3 used for training the image processing model is also different.
  • each sample image data H included in the sample image data set S3 is single-channel raw image format (RAW, Raw Image Format) image data of size a*b*1, and each reference image data I corresponding to a sample image data is RGB image data of size a*b*3, where a is the vertical pixel count of the image, b is the horizontal pixel count of the image, 1 denotes a single channel (e.g., the R channel, G channel, or B channel), and 3 denotes three channels (the RGB channels).
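For illustration, (H, I) pairs with these shapes can be built by mosaicing an a*b*3 RGB reference down to an a*b*1 RAW sample with a Bayer pattern. The application does not specify how its pairs are produced; the RGGB mosaicing below is just one common construction:

```python
import numpy as np

# Reduce an a*b*3 RGB reference to the a*b*1 RAW mosaic of an RGGB
# Bayer pattern -- one hypothetical way such (H, I) training pairs
# can be built, not a procedure taken from the application.

def rggb_mosaic(rgb):
    a, b, _ = rgb.shape
    raw = np.empty((a, b, 1), dtype=rgb.dtype)   # single channel, a*b*1
    raw[0::2, 0::2, 0] = rgb[0::2, 0::2, 0]      # R sites
    raw[0::2, 1::2, 0] = rgb[0::2, 1::2, 1]      # G sites
    raw[1::2, 0::2, 0] = rgb[1::2, 0::2, 1]      # G sites
    raw[1::2, 1::2, 0] = rgb[1::2, 1::2, 2]      # B sites
    return raw

I = np.random.default_rng(0).random((4, 6, 3))   # a*b*3 reference image I
H = rggb_mosaic(I)                               # a*b*1 sample image H
```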
  • in another example, each sample image data H included in the sample image data set S3 is single-channel RAW image data of size a*b*1, and each reference image data I corresponding to a sample image data H is likewise single-channel image data of size a*b*1.
  • the reference image data I and its corresponding sample image data H have the same scene but different white balance values.
  • the following describes the training process of the image processing model by taking the image processing process in which the image processing model performs demosaicing as an example.
  • Each sample image data H included in the sample image data set S3 is respectively input to the neural network to be trained to obtain processed image data.
  • the loss function may include, but is not limited to, a mean absolute error (MAE) loss function or a mean square error (MSE) loss function, for example.
  • the loss function includes the weight coefficients of each layer of the neural network to be trained.
  • the back-propagation algorithm and the gradient descent algorithm are used to iteratively adjust the weight coefficient values of each layer of the neural network to be trained, until the error between the processed image data output by the image processing model and the reference image data I is less than or equal to a preset threshold, or the number of iterations reaches a preset threshold, and the weight coefficient values of each layer of the neural network to be trained are then saved.
  • at this point, the trained neural network serves as the image processing model.
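The training procedure just described (MSE loss, iterative weight updates, and stopping when the error falls to a threshold or the iteration budget runs out) can be sketched with a linear map standing in for the layer-wise network; all sizes and values are illustrative:

```python
import numpy as np

# MSE loss, gradient-descent updates, and the two stopping rules from
# the text: error at or below a threshold, or iteration budget spent.
# A linear map stands in for the real layer-wise neural network.

rng = np.random.default_rng(1)
H = rng.normal(size=(64, 4))            # flattened sample image data H
w_true = np.array([0.5, -1.0, 2.0, 0.3])
I = H @ w_true                          # reference image data I

w = np.zeros(4)                         # weight coefficients to be trained
for step in range(10_000):              # iteration-count threshold
    err = H @ w - I
    mse = float(np.mean(err**2))        # MSE loss function
    if mse <= 1e-8:                     # error threshold reached
        break
    w -= 0.01 * (2.0 / len(I)) * (H.T @ err)   # gradient-descent step
```

Swapping the MSE line for `np.mean(np.abs(err))` gives the MAE variant also mentioned above (with the matching sign-based gradient).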
  • the ISP 102 can be provided with multiple ports, and the AI processor 103 can also be provided with multiple ports. The ISP 102 can provide the processed image data A to the AI processor 103 through one of its ports; the AI processor 103 processes the image data A to generate image data B, and provides the image data B to the ISP 102 through one of its ports.
  • taking the AI processor 103 performing demosaicing as an example, the combination of the ISP 102 and the AI processor 103 is described below with reference to FIG. 2.
  • in FIG. 2, the ISP 102 acquires image data from the camera 101, performs the three image processing processes of dark current correction, response nonlinearity correction, and shading correction on the acquired image data, and generates image data A, which is provided through port Vio to the input port Vai of the AI processor 103.
  • the image processing model run by the AI processor 103 performs demosaic processing on the image data A to generate image data B, and provides the image data B to the input port Vii of the ISP 102 through the output port Vao.
  • the ISP 102 performs subsequent image processing procedures such as white balance correction and color correction on the image data B input through the input port Vii, and generates image data C that is input to the AI processor 103 .
  • the image detection model run by the AI processor 103 can perform image detection processing on the image data C.
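The interleaved flow of FIG. 2 can be summarized as a chain of hand-offs. In this sketch the ports Vio/Vai/Vao/Vii become plain function calls and each stage body is a placeholder constant rather than a real algorithm:

```python
# Scalar sketch of the interleaved flow: ISP front-end stages, AI
# demosaicing, ISP back-end stages, then AI detection. Stage bodies
# are illustrative placeholders, not real ISP or model code.

def isp_front_end(raw):
    # dark current correction, response nonlinearity correction, shading
    # correction -> image data A, sent out through port Vio to port Vai
    return raw - 0.02

def ai_demosaic(image_a):
    # image processing model on the AI processor 103 -> image data B,
    # returned through port Vao to port Vii
    return image_a * 1.0

def isp_back_end(image_b):
    # white balance correction, color correction, ... -> image data C
    return image_b * 1.1

def ai_detect(image_c):
    # image detection model consumes image data C
    return {"score": image_c}

result = ai_detect(isp_back_end(ai_demosaic(isp_front_end(0.52))))
```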
  • there may be one or more AI processors 103. The image processing model used to perform image processing and the image detection model used to perform image detection can be set in the same AI processor 103, or can be set in different AI processors 103.
  • the parameters in the image processing algorithm run by the ISP 102 and the parameters in the image processing model run by the AI processor 103 described in the embodiments of the present application are obtained by debugging based on the sample image data set S2 collected by the camera 101 and the image detection results of the image detection model.
  • for the method of debugging each parameter in the image processing algorithm run by the ISP 102 and in the image processing model, refer to the embodiment shown in FIG. 5 below.
  • the sample image data set S1 used for training the image detection model running in the AI processor 103 is collected through big data, and the camera used for collecting the sample image data set S1 is a different camera device from the camera 101 shown in FIG. 1. Since there are significant differences in the manufacturing process, photoelectric response function, noise level, and other characteristics of different camera devices, the style of the sample image data D in the sample image data set S1 differs from the style of the images obtained after processing the image data collected by the camera device 101. This in turn leads to significant differences in the high-dimensional feature distributions of the image data collected by the camera device 101 and the sample image data D in the sample image data set S1, so that during the detection of the image data collected by the camera 101, the deviation between the detection result and the real result is relatively large, which reduces the detection accuracy of the image detection model deployed in the AI processor 103.
• to address this, the embodiment of the present application adjusts the parameters of the image processing algorithms used for executing multiple image processing processes (or adjusts both those parameters and the parameters of the image processing model), so that the style of the images obtained by processing the images collected by the camera 101 is consistent with the style of the sample image data D in the sample image data set S1. This reduces the difference, in the high-dimensional feature space, between the feature distribution of the image data collected by the camera 101 and that of the sample image data D in the sample image data set S1, which is conducive to improving the inference accuracy of the image detection model.
• moreover, the embodiment of the present application does not require any modification to the trained image detection model, which saves the time and computing-power overhead required for retraining and fine-tuning the image detection model. Only the parameters of the image processing algorithms used to perform the multiple image processing processes are adjusted; since the image processing algorithm does not perform the image detection process, training can be completed with fewer training samples, thereby reducing the number of training samples that need manual labeling and shortening the commissioning cycle when combining the image detection model with a new camera.
  • FIG. 3 shows a schematic structural diagram of a vehicle 300 provided by an embodiment of the present application.
  • Components coupled to or included in vehicle 300 may include control system 10 , propulsion system 20 , and sensor system 30 . It should be understood that the vehicle 300 may also include more systems, which will not be repeated here.
  • the control system 10 may be configured to control the operation of the vehicle 300 and its components.
  • the ISP 102 and the AI processor 103 shown in FIG. 1 can be set in the control system 10.
  • the control system 10 can also include devices such as a central processing unit, a memory, and the like, and the memory is used to store the instructions and data required for the operation of each processor .
• the propulsion system 20 may be configured to provide powered motion for the vehicle 300, and may include, but is not limited to, an engine/motor, an energy source, a transmission, and wheels.
• the sensor system 30 may include, but is not limited to, a global positioning system, an inertial measurement unit, a lidar sensor, or a millimeter-wave radar sensor.
  • the camera device 101 shown in FIG. 1 may be provided in the sensor system 30 .
  • the components and systems of vehicle 300 may be coupled together through a system bus, network, and/or other connection mechanism to operate in interconnection with other components within and/or outside of their respective systems. In specific work, various components in the vehicle 300 cooperate with each other to realize various automatic driving functions.
  • the automatic driving function may include, but is not limited to, blind spot detection, parking assist or lane change assist, and the like.
  • the camera device 101 may periodically collect image data, and provide the collected image data to the ISP 102 .
• the ISP 102 (or the ISP 102 together with the image processing model in the AI processor 103) processes the image data by performing multiple image processing processes, converting it into image data that can be recognized or calculated by the image detection model running in the AI processor 103.
• this enables the AI processor 103 to perform inference or detection for a specific task and generate the detection result.
• other components in the control system 10 (for example, a CPU that executes decisions) control other devices or components to perform automatic driving functions based on the detection results of the AI processor 103.
  • the manufacturer of the vehicle may not produce some parts by itself.
  • the trained image detection model is ordered through manufacturer A, and the camera device is ordered through manufacturer B.
• the training method described in the embodiments of the present application can be used to debug the parameters of the image processing algorithm or the image processing model used to execute the image processing flow. For another example, when a manufacturer upgrades certain models of vehicles, it may be necessary to replace the previously configured camera device with a camera device of a different model.
  • the training method described in the embodiments of the present application can also be used to Debug the parameters of the image processing algorithm or image processing model used to execute the image processing flow.
  • the parameter debugging of the image processing algorithm or the image processing model may be completed at the offline end (or in other words, the training is completed in the server or the device used for model training).
  • the image processing algorithm can be deployed in the ISP of the terminal.
  • the image processing model can be deployed in the AI processor 103 .
  • FIG. 4 shows a schematic diagram 400 of a system architecture including an electronic device for parameter debugging of an image processing algorithm provided by an embodiment of the present application.
  • the system architecture 400 includes a camera device 101 , a parameter debugging device 401 , a storage device 402 and a display device 403 .
  • the camera 101 is used for collecting a plurality of sample image data E, and storing the collected sample image data E in the storage device 402 .
• the imaging device 101 here and the imaging device 101 shown in FIG. 1 are the same imaging device (or imaging devices of the same model).
• the storage device 402 may include, but is not limited to, read-only memory or random access memory, and is used to store the sample image data E.
  • the storage device 402 may also store executable programs and data of an image processing algorithm for executing the image processing process, and executable programs and data of an image detection model for executing the image detection.
• the parameter debugging device 401 can run the image processing algorithm and the image detection model, and can also call, from the storage device 402, the sample image data E, the executable program and data of the image processing algorithm for executing the image processing process, and the executable program and data of the image detection model for executing image detection, in order to debug the parameters of the image processing algorithm.
  • the parameter debugging device 401 may also store the data generated by the operation and the debugging result after each parameter debugging of the image processing algorithm into the storage device 402 .
  • the parameter debugging device 401 and the storage device 402 may also be provided with I/O ports for data interaction with the display device 403 .
• the display device 403 may include a display, such as a screen, used for marking the sample image data E.
• the parameter debugging device 401 may acquire the sample image data E from the storage device 402, perform image processing on it, and provide the processed images to the display device 403 for presentation.
• the user then marks the sample image data E through the display device 403, and the marking information of the sample image data E is stored in the storage device 402.
• the sample image data E output by the camera 101 is single-channel linear RAW image data of high bit depth (for example, 16 bit, 20 bit or 24 bit), whose dynamic range is much larger than the dynamic range that the display can present.
• moreover, the sample image data E is a color filter array (CFA) image, which carries no color information, so it is difficult for the annotator to identify each target object in the sample image data E output by the camera device 101.
• therefore, the parameter debugging device 401 also runs an image processing algorithm T, which is used to process the sample image data E to generate a color image (such as an RGB image) that can be presented on a display with suitable brightness and color, making it convenient for the annotator to annotate the target objects presented in the sample image data E.
  • the image processing flow performed by the image processing algorithm T may include, but is not limited to: system error correction, global tone mapping, demosaicing or white balance correction.
• each parameter in the image processing algorithm T does not need to be adjusted, and the algorithm can be implemented using traditional image processing algorithms.
• it should be noted that the image processing algorithm T described in the embodiments of the present application is used to process the sample image data E to generate image data that can be shown on the display for annotators to mark, whereas the image processing algorithm whose parameters need to be adjusted is used to process the sample image data E to generate image data for image detection by the image detection model.
  • FIG. 5 shows a flow 500 of a method for debugging parameters of an image processing algorithm provided by an embodiment of the present application.
  • the execution body of the parameter debugging method for image processing described in the embodiments of the present application may be the parameter debugging device 401 shown in FIG. 4 .
  • the parameter debugging method for image processing includes the following steps:
  • Step 501 based on the sample image data set S2 , use an image processing algorithm to process each sample image data E in the sample image data set S2 to generate a plurality of image data F.
  • the sample image data set S2 includes a plurality of sample image data E and label information of each sample image data E.
  • each sample image data E in the sample image data set S2 is collected by the camera 101 as shown in FIG. 1 .
• the annotation information of the sample image data E is annotated based on the detection content performed by the image detection model. For example, when the image detection model is used to perform target detection, the annotation information of the sample image data E may include the target object and the position of the target object in the sample image data E; when the image detection model is used to perform pedestrian intent detection, the annotation information of the sample image data E may include the target object and the action information of the target object.
  • the image processing algorithms herein are used to perform one or more image processing procedures.
  • the one or more image processing procedures include, but are not limited to, dark current correction, response nonlinearity correction, lens shading correction, demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement or edge enhancement, and the like. It should be noted that the one or more image processing processes are usually performed sequentially. This embodiment of the present application does not specifically limit the execution order of the image processing process.
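To make the sequential execution concrete, here is a minimal sketch of such a pipeline in which each stage carries its own adjustable parameters; the stage functions, parameter names, and values below are illustrative assumptions, not the embodiment's actual algorithms:

```python
import numpy as np

class Stage:
    """One image processing procedure with its own adjustable parameters."""
    def __init__(self, fn, params):
        self.fn = fn          # stage function: (image, params) -> image
        self.params = params  # dict of adjustable parameters

    def __call__(self, img):
        return self.fn(img, self.params)

def dark_current_correction(img, p):
    # subtract a constant dark-current offset, clamping at zero
    return np.clip(img - p["offset"], 0.0, None)

def tone_mapping(img, p):
    # simple gamma compression of a normalized linear image
    return np.power(img / img.max(), p["gamma"])

pipeline = [
    Stage(dark_current_correction, {"offset": 0.01}),
    Stage(tone_mapping, {"gamma": 0.45}),
]

def run_pipeline(img, pipeline):
    for stage in pipeline:    # stages are executed sequentially
        img = stage(img)
    return img

raw = np.linspace(0.02, 1.0, 16).reshape(4, 4)  # stand-in for RAW data
out = run_pipeline(raw, pipeline)
```

Reordering the list reorders the processing flow, consistent with the statement above that the execution order is not specifically limited.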
  • step 502 the image data F is detected by using the image detection model, and a detection result is generated.
  • the image detection model may perform at least one of the following detections: object detection, lane line detection, or pedestrian intent detection, and the like.
  • the image detection model is obtained by training a deep neural network based on the image dataset S1. It should be noted that the image data D in the image data set S1 is collected by other imaging devices different from the imaging device 101 .
  • the training method of the image detection model is a traditional technology, and details are not described here.
  • Step 503 based on the detection result and the label information of the sample image data E, adjust the parameters of the image processing algorithm.
  • the parameters of the image processing algorithm may be adjusted by using a machine learning method.
  • the second possible implementation manner is described in detail below.
  • a loss function is constructed based on the error between the detection result of each sample image data E in the sample image data set S2 and the label information of the sample image data E.
  • the loss function may include, but is not limited to, a cross-entropy function and the like.
  • the parameters of the image processing module for executing one or more image processing procedures in the image processing algorithm are adjusted by using the back-propagation algorithm and the gradient descent algorithm.
  • the gradient descent algorithm may specifically include, but is not limited to, optimization algorithms such as SGD and Adam.
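As a toy illustration of how the chain rule lets the loss gradient flow through a frozen detector into an upstream processing parameter (all numbers and the one-parameter "pipeline" here are invented for illustration):

```python
import math

# A one-parameter processing stage y = x**g feeds a frozen "detector"
# d(y) = w*y, with squared-error loss against a target label. Only the
# processing parameter g is updated; the detector weight w never changes.
x, w, target = 0.5, 2.0, 0.8
g = 1.0   # adjustable processing parameter (e.g. a gamma-like exponent)
lr = 0.1  # learning rate

for _ in range(200):
    y = x ** g                  # image processing stage
    pred = w * y                # frozen image detection model
    # chain rule: dL/dg = dL/dpred * dpred/dy * dy/dg
    grad = 2 * (pred - target) * w * (x ** g) * math.log(x)
    g -= lr * grad              # gradient-descent step on g only

final_loss = (w * x ** g - target) ** 2
```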
  • the chain rule can be used to calculate the gradient of the preset loss function with respect to each parameter in the image processing algorithm.
  • the image processing algorithms used to execute each image processing flow are independent of each other.
  • the image processing flow performed by the image processing algorithm is dark current correction, response nonlinear correction, lens shading correction, demosaicing, white balance correction, noise reduction, contrast enhancement or edge enhancement, etc.
• the parameters adjusted first during back propagation are the parameters of the image processing algorithm for performing edge enhancement; the parameters of the image processing algorithm for performing contrast enhancement are adjusted next, and so on, in the reverse order of the image processing flow.
  • the embodiments of the present application may include more or less image processing processes, and accordingly may include more or less parameters to be adjusted.
• since the embodiment of the present application does not limit the order of the image processing flow performed by the image processing algorithm, it likewise does not limit which parameters are adjusted first and which are adjusted last when the back-propagation algorithm is used to adjust the parameters of the image processing algorithm.
  • Step 504 Determine whether the loss value of the preset loss function is less than or equal to a preset threshold. If the loss value of the preset loss function is less than or equal to the preset threshold, the parameters of the image processing algorithm are saved; if the loss value of the preset loss function is greater than the preset threshold, step 505 is executed.
  • Step 505 Determine whether the number of times of iteratively adjusting the parameters of the image processing algorithm is greater than or equal to a preset threshold. If the number of times of iteratively adjusting the parameters of the image processing algorithm is greater than or equal to the preset threshold, the parameters of the image processing algorithm are saved, and if the number of times of iteratively adjusting the parameters of the image processing algorithm is less than the preset threshold, continue to execute steps 501-505.
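The two-part stopping logic of steps 504 and 505 (loss below a threshold, or iteration budget exhausted) can be sketched as follows; the step function and threshold values are placeholders, not the embodiment's actual settings:

```python
# Placeholder step function: its loss halves on every call.
state = {"loss": 1.0}
def fake_step():
    state["loss"] *= 0.5
    return {"p": state["loss"]}, state["loss"]

def debug_parameters(step_fn, loss_threshold=1e-3, max_iters=100):
    params, loss = None, float("inf")
    for _ in range(max_iters):          # step 505: bounded iteration count
        params, loss = step_fn()        # one pass of steps 501-503
        if loss <= loss_threshold:      # step 504: loss small enough, stop
            break
    return params, loss                 # parameters are saved either way

params, final_loss = debug_parameters(fake_step)
```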
  • the above-mentioned one or more image processing procedures may all be implemented by traditional image processing algorithms.
• in one example, the one or more image processing procedures include dark current correction, response nonlinearity correction, lens shading correction, demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement and edge enhancement.
• the parameters of the image processing algorithm to be adjusted may include, but are not limited to: parameters of the image processing algorithm for performing lens shading correction, parameters of the image processing algorithm for performing white balance correction, parameters of the image processing algorithm for performing tone mapping, parameters of the image processing algorithm for performing contrast enhancement, parameters of the image processing algorithm for performing edge enhancement, and parameters of the image processing algorithm for performing noise reduction.
• lens shading correction is used to correct the illuminance attenuation caused by the increase of the chief ray's incident angle in the edge area of the image. It uses a polynomial to fit the illuminance attenuation surface, where the independent variable of the polynomial is the distance between each pixel of the image and the optical center of the camera device. Therefore, in the image processing algorithm for performing lens shading correction, the parameter to be adjusted is the distance between each pixel of the image and the optical center of the camera, that is, the value of the independent variable in the polynomial.
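A minimal sketch of such a radial-polynomial gain, assuming an even-order polynomial 1 + a2·r² + a4·r⁴ (the actual polynomial form and coefficients used by the embodiment are unspecified):

```python
import numpy as np

def lens_shading_gain(h, w, center, coeffs):
    """Polynomial gain that rises with distance from the optical center."""
    cy, cx = center
    y, x = np.mgrid[0:h, 0:w].astype(float)
    r = np.hypot(y - cy, x - cx)          # pixel-to-optical-center distance
    r = r / r.max()                       # normalize distances to [0, 1]
    a2, a4 = coeffs
    return 1.0 + a2 * r**2 + a4 * r**4    # gain grows toward the edges

gain = lens_shading_gain(4, 6, center=(1.5, 2.5), coeffs=(0.3, 0.1))
corrected = np.ones((4, 6)) * 0.5 * gain  # gain applied to a flat test image
```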
  • the execution process of white balance correction is as follows: first, the neutral color pixel search algorithm is used to screen the neutral color area in the image, and based on the screening result, the boundary coordinates of the neutral color area in the image are determined. Then, the pixel values in the filtered neutral regions are weighted using the luminance channel of the image to generate a binarized neutral pixel mask. The individual (near) neutral pixels are then weighted averaged using this neutral pixel mask to obtain an estimate of the color of the light source in the image. Finally, by calculating the ratio between the RGB channels of the light source color, the white balance correction coefficient corresponding to the image is obtained, and the white balance correction coefficient is applied to the original image, that is, the white balance corrected image is obtained. Therefore, in the image processing algorithm for performing white balance correction, the parameter to be adjusted is the boundary coordinates of the neutral color area in the image.
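The following is a simplified sketch of this estimation; the neutral-pixel test (channels nearly equal) and the tolerance value are assumptions standing in for the unspecified neutral color pixel search algorithm:

```python
import numpy as np

def white_balance(img, tol=0.25):
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    # assumed neutral-pixel test: channel spread within tol of the maximum
    mask = (mx - mn) <= tol * mx.clip(min=1e-6)
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    w = np.where(mask, luma, 0.0)            # luminance-weighted neutral mask
    # weighted average of (near-)neutral pixels estimates the light source
    light = (img * w[..., None]).sum((0, 1)) / max(w.sum(), 1e-6)
    gains = light[1] / light.clip(min=1e-6)  # channel ratios against green
    return np.clip(img * gains, 0.0, 1.0)

img = np.full((8, 8, 3), [0.55, 0.5, 0.45])  # gray scene with a warm cast
out = white_balance(img)
```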
  • Tone mapping is used to receive a linear image with high bit depth, convert the linear image into a nonlinear image, and complete the compression of the image bit depth, and output an 8-bit image.
• when a gamma transformation algorithm is used to compress the dynamic range of the linear image, the trainable parameter is the γ parameter; when a logarithmic transformation algorithm is used to compress the dynamic range of the linear image, the trainable parameter is the base of the logarithmic transformation; when a more complex tone mapping model is used, such as a retinex model based on the dynamic range response of the human eye, the trainable parameters can be the target luminance parameter (key), the target saturation parameter (saturation), and the filter kernel parameters used to generate the low-pass filtered image.
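Minimal sketches of the gamma and logarithmic variants, assuming a 16-bit linear input and an 8-bit output (the default parameter values below are illustrative only):

```python
import numpy as np

def gamma_tone_map(linear, gamma=0.45, in_max=2**16 - 1):
    y = np.power(linear / in_max, gamma)       # compress dynamic range
    return np.round(y * 255).astype(np.uint8)  # quantize to 8-bit output

def log_tone_map(linear, base=50.0, in_max=2**16 - 1):
    x = linear / in_max                          # normalize to [0, 1]
    y = np.log1p((base - 1) * x) / np.log(base)  # base shapes the curve
    return np.round(y * 255).astype(np.uint8)

raw = np.array([0.0, 1024.0, 65535.0])  # dark, mid, and full-scale samples
g8 = gamma_tone_map(raw)
l8 = log_tone_map(raw)
```

In both variants a single scalar (γ, or the base) controls the compression curve, which is what makes it a natural trainable parameter.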
  • Contrast Enhancement is used to enhance the contrast of an image.
• for example, the CLAHE (contrast limited adaptive histogram equalization) algorithm can be used.
• the CLAHE algorithm contains two adjustable parameters: the contrast threshold parameter and the size of the sub-image blocks used for histogram statistics.
  • the size of the sub-image block may be fixed, and only the contrast threshold parameter may be adjusted. Further, the size of the sub-image block can be fixed to the size of the input image.
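A simplified sketch of the contrast-limited part of the algorithm, applied globally, i.e. with the sub-image block fixed to the whole input image as the simplification above permits (the clip limit value is illustrative):

```python
import numpy as np

def clipped_hist_equalize(img_u8, clip_limit=4.0, bins=256):
    """Histogram equalization with the CLAHE-style contrast limit."""
    hist, _ = np.histogram(img_u8, bins=bins, range=(0, 256))
    limit = clip_limit * hist.mean()
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / bins  # redistribute excess
    cdf = np.cumsum(hist) / hist.sum()
    lut = np.round(cdf * 255).astype(np.uint8)      # equalization lookup
    return lut[img_u8]

img = np.tile(np.arange(64, 192, dtype=np.uint8), (16, 1))  # low contrast
out = clipped_hist_equalize(img)
```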
• in edge enhancement, the Y-channel image of the received image is first subjected to Gaussian filtering to obtain a low-pass Y-channel image Y L. The difference image between the original Y-channel image and the low-pass Y-channel image Y L contains the high-frequency signal, which usually corresponds to the edge areas in the image. By amplifying the intensity of the high-frequency signal and superimposing it onto the low-pass Y-channel image Y L, the edges in the image can be enhanced. Here, the parameter to be adjusted is the edge enhancement factor β.
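A minimal sketch of this scheme, with a 3×3 box blur standing in for the Gaussian filter (the step image and the value of the enhancement factor are illustrative):

```python
import numpy as np

def box_blur(y):
    """3x3 box blur with edge padding (stand-in for Gaussian filtering)."""
    p = np.pad(y, 1, mode="edge")
    return sum(p[i:i + y.shape[0], j:j + y.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def edge_enhance(y, beta=2.0):
    y_low = box_blur(y)            # low-pass Y channel (Y_L)
    high = y - y_low               # high-frequency signal (edges)
    return y_low + beta * high     # amplified edges superimposed back

y = np.zeros((6, 6)); y[:, 3:] = 1.0   # vertical step edge
out = edge_enhance(y)
```

The characteristic over/undershoot around the step is what visually sharpens the edge.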
• for noise reduction, the bilateral filter noise reduction algorithm is usually used.
• the trainable parameters can include: the spatial-domain Gaussian kernel parameter σs, used to control the relationship between noise reduction intensity and spatial distance, and the pixel-value-domain Gaussian kernel parameter σr, used to control the relationship between noise reduction intensity and the difference in response values.
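A naive (unoptimized) sketch of a bilateral filter exposing the two parameters σs and σr named above; the radius and default values are illustrative:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # spatial weight: falls off with distance (controlled by sigma_s)
            ws = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s**2))
            # range weight: falls off with response difference (sigma_r)
            wr = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r**2))
            wgt = ws * wr
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

rng = np.random.default_rng(0)
flat = np.full((12, 12), 0.5) + rng.normal(0, 0.05, (12, 12))  # noisy patch
smoothed = bilateral(flat)
```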
  • part of the image processing flow in the above one or more image processing flows may be implemented by an image processing model.
  • the above-mentioned image processing processes such as demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement and edge enhancement are implemented by an image processing model.
  • the image processing model may include a variety of image processing models, each of which is used to perform a specific image processing operation.
  • a noise-cancelling image processing model is used to perform a noise-cancelling image processing operation
  • a demosaicing image processing model is used to perform a demosaicing image processing operation.
  • Each image processing model may be obtained by using a traditional neural network training method, and using training samples to train a multi-layer neural network, and the training method will not be repeated in this embodiment of the present application.
• the adjusted parameters of the image processing algorithm are the weight matrices of all the layers forming the image processing model (the weight matrices formed by the vectors W of the many layers). It should be noted that the above one or more image processing models are pre-trained at the offline end.
• when the parameters of the image processing algorithm are adjusted using the parameter debugging method described in this application, it is only necessary to fine-tune the parameters of the neural network forming the image processing model, so that the style features of the images output by the image processing model become similar to the style features of the sample image data used to train the image detection model, thereby improving the detection accuracy of the image detection model.
  • FIG. 6 is a schematic diagram of a specific application of the parameter debugging method for image processing according to an embodiment of the present application.
  • the execution subject of the parameter debugging method for image processing described in FIG. 6 may be the parameter debugging device 401 shown in FIG. 4 .
  • Step 601 using the camera 101 to collect a plurality of sample image data E.
• the plurality of sample image data E are single-channel linear RAW image data of high bit depth (e.g., 16 bits, 20 bits, or 24 bits).
  • Step 602 using the image processing algorithm T to process the sample image data E, to generate the processed image data F for presentation on the display screen.
  • the image data F is a color image, such as an RGB image.
• Step 603 Manually label the image data F to obtain the labeling information of each sample image data E. Since the detection performed by the image detection model is target detection, the annotation information of the sample image data includes the category of the target object presented in the sample image data and its position in the sample image data.
  • Step 604 using an image processing algorithm to process the sample image data to generate image data D.
  • Step 605 Input the image data D into the image detection model to obtain an image detection result, wherein the image detection result includes the position area of the preset target object in the sample image data and the probability value of the preset target object.
  • Step 606 Construct a loss function based on the image detection result and the labeling information of the sample image data E.
• let N(·) denote the image detection model; the loss can then be written as L = Loss(N(Y out), GT), where Loss(·) represents the loss function of the image detection model, Y out represents the image data input to the image detection model (that is, the image data finally output by the image processing algorithm after executing the multiple image processing procedures), and GT represents the annotation information of the sample image data E.
• Step 607 Determine whether the loss value of the loss function reaches a preset threshold. If the preset threshold is not reached, step 608 is performed; if the preset threshold is reached, the parameters of the image processing algorithm are saved.
• Step 608 Use the back-propagation algorithm and the gradient descent algorithm to adjust the parameters of the image processing algorithm.
• the gradient descent update can be written as θ(t+1) = θ(t) − η·∇L(θ(t)), where θ(t+1) is the value obtained by stepping the current value θ(t) a distance in the direction opposite to its gradient, and η represents the learning rate, which is used to control the step size of each iteration. Correspondingly, the value of the edge enhancement factor β after one iteration is: β(t+1) = β(t) − η·(∂L/∂β)|β(t).
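A tiny numeric illustration of the update rule θ ← θ − η·∇L applied to the edge enhancement factor (all values here are invented for illustration):

```python
eta = 0.01                     # learning rate (step size per iteration)
beta, grad = 2.0, 5.0          # current beta and dL/dbeta from backprop
beta_next = beta - eta * grad  # step against the gradient direction
```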
• the adjustment process for the parameters of the image processing algorithms executing the other image processing procedures may be similar to the adjustment process for the parameters of the image processing algorithm for edge enhancement, and details are not repeated here.
  • an embodiment of the present application further provides an image detection method.
  • FIG. 7 shows a process 700 of an image detection method provided by an embodiment of the present application.
  • the execution subject of the image detection method described in FIG. 7 may be the ISP processor and the AI processor described in FIG. 1 .
  • the image detection method includes the following steps:
  • Step 701 the image data to be detected is collected by the camera 101 .
  • Step 702 using an image processing algorithm to process the image data to be detected to generate a processed image.
  • Step 703 Input the processed image into the image detection model to obtain a detection result.
  • the parameters of the image processing algorithm described in step 702 may be obtained after debugging using the parameter debugging method for image processing as described in FIG. 5 .
  • the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
  • the electronic device includes corresponding hardware and/or software modules for executing each function.
  • the present application can be implemented in hardware or in the form of a combination of hardware and computer software in conjunction with the algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functionality for each particular application in conjunction with the embodiments, but such implementations should not be considered beyond the scope of this application.
  • the above one or more processors may be divided into functional modules according to the foregoing method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware. It should be noted that, the division of modules in this embodiment is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 8 shows a possible schematic diagram of the composition of the image detection apparatus 800 involved in the above embodiment.
  • the image detection apparatus 800 may include: Collection module 801 , processing module 802 and detection module 803 .
• the acquisition module 801 is configured to collect image data to be detected through the first camera device; the processing module 802 is configured to process the image data to be detected by using an image processing algorithm to generate a processed image; the detection module 803 is configured to input the processed image into an image detection model to obtain a detection result; wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of the first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
  • the image detection model is obtained by performing neural network training on the second sample image data collected by the second camera device.
  • the parameters of the image processing algorithm are determined by a parameter adjustment module, and the parameter adjustment module (not shown in the figure) includes: a comparison sub-module (not shown in the figure), which is is configured to compare the detection result of the first sample image data with the annotation information of the first sample image data to obtain the comparison result; the adjustment sub-module (not shown in the figure) is configured to be based on The comparison result iteratively adjusts the parameters of the image processing algorithm; the saving sub-module (not shown in the figure) is configured to save the parameters of the image processing algorithm when a preset condition is satisfied.
• in some embodiments, the comparison result is an error, and the adjustment sub-module (not shown in the figure) is further configured to: construct a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be updated in the image processing algorithm; and, based on the target loss function, iteratively update the parameters of the image processing algorithm by using the back-propagation algorithm and a gradient descent algorithm.
  • the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, and image noise reduction.
• the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates of the neutral color area in the image in the white balance correction algorithm; the target brightness, target saturation, and filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
  • the image processing algorithm is executed by a trained image processing model; the parameters of the image processing algorithm include: weight coefficients of the neural network used to generate the image processing model.
• the annotation information of the first sample image data is manually annotated; and the apparatus further includes: a conversion module (not shown in the figure) configured to convert the first sample image data into a color image suitable for human annotation.
  • the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
  • the image detection apparatus 800 provided in this embodiment is configured to execute the image detection method executed by the electronic device 100, and can achieve the same effect as the above-mentioned implementation method.
  • the modules corresponding to FIG. 8 can be implemented in software, hardware or a combination of the two.
  • each module can be implemented in software to drive the ISP 102 and the AI processor 103 in the electronic device 100 shown in FIG. 1 .
  • each module may include a corresponding processor and a corresponding driver software.
  • FIG. 9 shows a possible schematic diagram of the composition of the parameter debugging apparatus 900 for image processing involved in the above embodiment.
  • the image processing parameter debugging apparatus 900 may include: a processing module 901 , a detection module 902 , a comparison module 903 and an adjustment module 904 .
  • the processing module 901 is configured to perform image processing on the first sample image data by using an image processing algorithm to generate first image data, wherein the first sample image data is collected by a first camera device; the detection module 902 is configured to input the first image data into a pre-trained image detection model to obtain a detection result; the comparison module 903 is configured to compare the detection result with the annotation information of the first sample image data to obtain a comparison result; and the adjustment module 904 is configured to adjust the parameters of the image processing algorithm based on the comparison result.
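The cooperation of the modules 901 to 904 can be sketched with a toy, runnable example. All of the functions below are hypothetical stand-ins introduced for illustration only, not the actual ISP pipeline or detection model of this application: a one-parameter "image processing algorithm" (a global gain) is iteratively adjusted, via a finite-difference gradient step, until the output of a frozen "detection model" matches the annotation.

```python
import numpy as np

# Hypothetical stand-ins for illustration only: a one-parameter "image
# processing algorithm" (a global gain) and a frozen "detection model"
# (mean brightness); neither is the actual ISP or detector of this application.
def process(raw, gain):                  # role of processing module 901
    return np.clip(raw * gain, 0.0, 1.0)

def detect(img):                         # role of detection module 902 (pre-trained, fixed)
    return img.mean()

def tune_gain(raw, annotation, gain=0.5, lr=0.5, steps=200):
    for _ in range(steps):
        error = detect(process(raw, gain)) - annotation   # comparison module 903
        # adjustment module 904: gradient step on the squared error,
        # with d(detect)/d(gain) estimated by central finite differences
        eps = 1e-4
        grad = (detect(process(raw, gain + eps))
                - detect(process(raw, gain - eps))) / (2 * eps)
        gain -= lr * error * grad        # chain rule for L = 0.5 * error**2
        if abs(error) < 1e-4:
            break
    return gain

raw = np.linspace(0.1, 0.6, 64)          # synthetic "first sample image data"
gain = tune_gain(raw, annotation=0.45)
```

In an actual implementation the gain would be replaced by the full set of ISP parameters and the finite differences by backpropagation, but the loop structure (process, detect, compare, adjust) is the same.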
  • the image detection model is obtained by performing neural network training on the second sample image data collected by the second camera device.
  • the comparison result is an error, and the adjustment module is configured to: construct a target loss function based on the error between the detection result and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and, based on the target loss function, iteratively adjust the parameters of the image processing algorithm by using a backpropagation algorithm and a gradient descent algorithm.
  • the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.
  • the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial domain Gaussian parameter and the pixel value domain Gaussian parameter in the image noise reduction algorithm.
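The last item in this list, the spatial domain and pixel value domain Gaussian parameters, corresponds to a bilateral-type noise reduction filter. A minimal one-dimensional sketch (illustrative only; the actual filter of this application is not disclosed here) shows how the two parameters enter the kernel:

```python
import numpy as np

def bilateral_1d(signal, sigma_space, sigma_range, radius=3):
    """Toy 1-D bilateral filter: sigma_space is the spatial domain Gaussian
    parameter, sigma_range is the pixel value domain Gaussian parameter."""
    out = np.empty_like(signal, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-(offsets ** 2) / (2 * sigma_space ** 2))
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)
        neighbors = signal[idx]
        range_w = np.exp(-((neighbors - signal[i]) ** 2) / (2 * sigma_range ** 2))
        w = spatial_w * range_w
        out[i] = np.sum(w * neighbors) / np.sum(w)
    return out

noisy = np.array([0.0, 0.1, 0.0, 1.0, 0.9, 1.0], dtype=float)
smoothed = bilateral_1d(noisy, sigma_space=1.0, sigma_range=0.2)
```

A small sigma_range preserves the step edge between the two plateaus while smoothing the noise inside each plateau, which is why both Gaussian parameters are natural tuning targets.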
  • the image processing algorithm is executed by a trained image processing model; the parameters of the image processing algorithm include: weight coefficients of the neural network used to generate the image processing model.
  • the annotation information of the first sample image data is manually annotated; and the apparatus further includes: a conversion module (not shown in the figure) configured to convert the first sample image data into a color image suitable for manual annotation.
  • the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
  • the image detection apparatus 800 may include at least one processor and a memory. The at least one processor can call all or part of a computer program stored in the memory to control and manage the actions of the electronic device 100, for example, to support the electronic device 100 in performing the steps performed by the above-mentioned modules.
  • the memory may be used to support the operation of the electronic device 100 by storing program codes and data, and the like.
  • the processor may implement or execute the various exemplary logic modules described in conjunction with the present disclosure; it may be a combination of one or more microprocessors that implement computing functions, such as, but not limited to, the image signal processor 101 and the AI processor 103 shown in FIG. 1.
  • the microprocessor combination may also include a central processing unit, a controller, and the like.
  • the processor may include other programmable logic devices, transistor logic devices, or discrete hardware components in addition to the processors shown in FIG. 1 .
  • the memory may include random access memory (RAM) and read only memory (ROM), among others.
  • the random access memory may include volatile memory (such as SRAM, DRAM, DDR SDRAM (Double Data Rate SDRAM), or SDRAM) and non-volatile memory.
  • the RAM can store data (such as image processing algorithms) and parameters required for the operation of the ISP 102 and the AI processor 103, intermediate data generated during their operation, and image data processed by the ISP 102 and the AI processor 103.
  • Executable programs of the ISP 102 and the AI processor 103 may be stored in the read-only memory (ROM). Each of the above components can perform its own work by loading the corresponding executable program.
  • the executable program stored in the memory can execute the image detection method as described in FIG. 7 .
  • the parameter debugging apparatus 900 may include at least one processor and a memory. The at least one processor can call all or part of a computer program stored in the memory to control and manage the actions of the parameter debugging device 401, for example, to support the parameter debugging device 401 in performing the steps performed by the above-mentioned modules.
  • the memory may be used to support the operation of the parameter debugging device 401 by storing program codes and data, and the like.
  • the processor can implement or execute the various exemplary logic modules described in conjunction with the disclosure of the present application; it may be a combination of one or more microprocessors that implement computing functions, including but not limited to a central processing unit, a controller, and the like.
  • the processor may also include other programmable logic devices, transistor logic devices, or discrete hardware components, or the like.
  • the memory may include random access memory (RAM), read only memory ROM, and the like.
  • the random access memory can include volatile memory (such as SRAM, DRAM, DDR SDRAM (Double Data Rate SDRAM), or SDRAM) and non-volatile memory.
  • the RAM may store data (such as image processing algorithms) and parameters required for the operation of the parameter debugging device 401, intermediate data generated by the parameter debugging device 401, and output results after the parameter debugging device 401 runs.
  • An executable program of the parameter debugging device 401 may be stored in the read-only memory (ROM). Each of the above components can perform its own work by loading the executable program.
  • the executable program stored in the memory can execute the parameter adjustment method described in FIG. 5 or FIG. 6 .
  • This embodiment further provides a computer-readable storage medium, where computer instructions are stored; when the computer instructions are run on a computer, the computer is caused to execute the above-mentioned related method steps, so as to implement the image detection method of the image detection apparatus 800 or the parameter adjustment method of the parameter debugging apparatus 900 in the above-mentioned embodiments.
  • This embodiment also provides a computer program product; when the computer program product is run on a computer, the computer is caused to execute the above-mentioned related steps, so as to implement the image detection method of the image detection apparatus 800 or the parameter adjustment method of the parameter debugging apparatus 900 in the above-mentioned embodiments.
  • The computer-readable storage medium or the computer program product provided in this embodiment is used to execute the corresponding method provided above; therefore, for the beneficial effects that can be achieved, reference may be made to the corresponding method provided above, and details are not repeated here.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium.
  • The readable storage medium includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned readable storage medium includes any medium that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

The embodiments of the present application provide an image detection method, an apparatus and an electronic device. The image detection method comprises: acquiring, by means of a first camera apparatus, image data to be detected; using an image processing algorithm to process said image data, so as to generate a processed image; and inputting the processed image into an image detection model to obtain a detection result, the parameters of the image processing algorithm being obtained by comparing annotation information of first sample image data acquired by the first camera apparatus with the detection result of the image detection model on the first sample image data and performing adjustment on the basis of the comparison result. The image detection method provided in the present application improves the accuracy of inference of an image detection model when a new photographing apparatus is combined with a trained image detection model.

Description

Image detection method, apparatus, and electronic device

Technical Field

The embodiments of the present application relate to the technical field of artificial intelligence, and in particular, to an image detection method, an apparatus, and an electronic device.

Background

With the development of science and technology, artificial intelligence (AI) technology has advanced by leaps and bounds. Some artificial intelligence technologies use machine learning methods to construct initial models of various structures, such as neural network models, support vector machine models, and decision tree models. The initial model is then trained on training samples to realize functions such as image detection and speech recognition. At present, in computer vision technology based on image detection, a neural network is usually trained to obtain a perception model, which performs image detection tasks such as scene recognition, object detection, or image segmentation.

In related computer vision technologies, the sample image data used for training the neural network and the image data to be detected are usually collected by different camera modules. Because different camera modules differ significantly in manufacturing process, photoelectric response function, and noise level, there is a large deviation between the detection results of the perception model and the real results during image detection. Therefore, when a new camera module is combined with an already trained perception model, how to efficiently improve the accuracy of the perception model's detection results is a problem that needs to be solved.

Summary of the Invention
The image detection method, apparatus, and electronic device provided by the present application can improve the accuracy of image detection model inference when a new photographing device is combined with an already trained image detection model.

To achieve the above object, the present application adopts the following technical solutions:

In a first aspect, an embodiment of the present application provides an image detection method, including: collecting image data to be detected by a first camera device; processing the image data to be detected by using an image processing algorithm to generate a processed image; and inputting the processed image into an image detection model to obtain a detection result, wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.

In the embodiments of the present application, the parameters of the image processing algorithm used to execute multiple image processing processes are adjusted based on the image detection model, so that the style of the image obtained after processing an image collected by the first camera device is consistent with the style of the sample image data used to train the image detection model. This reduces the difference between the feature distributions, in a high-dimensional space, of the image data collected by the camera device and of the sample image data used to train the image detection model, which is beneficial to improving the accuracy of image detection model inference.
Based on the first aspect, in a possible implementation manner, the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.

Based on the first aspect, in a possible implementation manner, the parameters of the image processing algorithm are determined by the following steps: comparing the detection result with the annotation information of the first sample image data to obtain the comparison result; iteratively adjusting the parameters of the image processing algorithm based on the error between the detection result and the annotation information of the sample image data; and saving the parameters of the image processing algorithm when a preset condition is satisfied.

The preset condition here may include, but is not limited to: the error is less than or equal to a preset threshold, or the number of iterations is greater than or equal to a preset threshold.

Based on the first aspect, in a possible implementation manner, the comparison result is an error, and iteratively adjusting the parameters of the image processing algorithm based on the comparison result includes: constructing a target loss function based on the error between the detection result and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and, based on the target loss function, iteratively adjusting the parameters of the image processing algorithm by using a backpropagation algorithm and a gradient descent algorithm.
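In symbols (notation introduced here for illustration only), writing g_θ for the image processing algorithm with tunable parameters θ, f for the frozen image detection model, x for the first sample image data, and y for its annotation information, the construction described above is:

```latex
L(\theta) \;=\; \ell\bigl(f(g_\theta(x)),\, y\bigr),
\qquad
\theta^{(t+1)} \;=\; \theta^{(t)} \;-\; \eta\, \nabla_{\theta} L\bigl(\theta^{(t)}\bigr)
```

Here ℓ is the chosen error measure (for example, a squared or cross-entropy loss), η is the learning rate, and the gradient ∇_θ L is obtained by backpropagating through both the fixed detection model f and the image processing algorithm g_θ; only θ is updated.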
Based on the first aspect, in a possible implementation manner, the processing of the image data and the first sample image data includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.

Based on the first aspect, in a possible implementation manner, the image processing algorithm is executed by an image signal processor, and the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial domain Gaussian parameter and the pixel value domain Gaussian parameter in the image noise reduction algorithm.
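As one illustration of how the target brightness parameter of a tone mapping algorithm might act (a hypothetical sketch; the application does not specify the actual tone mapping operator), a global gamma curve can be driven by a tunable target brightness:

```python
import numpy as np

def tone_map(img, target_brightness):
    """Toy global tone mapping: choose a gamma so that the mean output
    brightness approaches the tunable target_brightness parameter.
    Assumes img values lie strictly inside (0, 1)."""
    mean_in = img.mean()
    # solve mean_in ** gamma == target_brightness  =>  gamma = log(target)/log(mean_in)
    gamma = np.log(target_brightness) / np.log(mean_in)
    return img ** gamma

img = np.random.default_rng(0).uniform(0.05, 0.95, size=(8, 8))
out = tone_map(img, target_brightness=0.5)
```

Because the mean of the mapped pixels is not exactly the mapped mean, the output brightness only approximates the target; a tuned parameter of this kind is exactly what the debugging procedure above would adjust.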
Based on the first aspect, in a possible implementation manner, the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm further include: weight coefficients of the neural network used to generate the image processing model.

Based on the first aspect, in a possible implementation manner, the annotation information of the first sample image data is manually annotated, and the method further includes: converting the first sample image data into a color image suitable for manual annotation.
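The conversion step could, for example, turn raw RGGB Bayer data into a small RGB preview that an annotator can view. The reconstruction below is a deliberately crude per-2x2-block scheme, assumed here purely for illustration; a production pipeline would use full demosaicing:

```python
import numpy as np

def bayer_rggb_to_rgb(raw):
    """Crude demosaic for manual annotation: each 2x2 RGGB block becomes
    one RGB pixel (R, mean of the two Gs, B). Halves the resolution."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

raw = np.arange(16, dtype=float).reshape(4, 4)   # synthetic 4x4 Bayer frame
rgb = bayer_rggb_to_rgb(raw)
```

The annotation only needs to be legible to a human, so resolution loss in the preview is acceptable; the annotations are then paired with the original raw data during parameter debugging.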
Based on the first aspect, in a possible implementation manner, the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
In a second aspect, an embodiment of the present application provides a parameter adjustment method for image processing, including: performing image processing on first sample image data by using an image processing algorithm to generate first image data, wherein the first sample image data is collected by a first camera device; inputting the first image data into a pre-trained image detection model to obtain a detection result; comparing the detection result with the annotation information of the first sample image data to obtain a comparison result; and adjusting the parameters of the image processing algorithm based on the comparison result.

Based on the second aspect, in a possible implementation manner, the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.

Based on the second aspect, in a possible implementation manner, the comparison result is an error, and iteratively adjusting the parameters of the image processing algorithm based on the comparison result includes: constructing a target loss function based on the error between the detection result and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and, based on the target loss function, iteratively adjusting the parameters of the image processing algorithm by using a backpropagation algorithm and a gradient descent algorithm.

Based on the second aspect, in a possible implementation manner, the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.

Based on the second aspect, in a possible implementation manner, the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial domain Gaussian parameter and the pixel value domain Gaussian parameter in the image noise reduction algorithm.

Based on the second aspect, in a possible implementation manner, the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include: weight coefficients of the neural network used to generate the image processing model.

Based on the second aspect, in a possible implementation manner, the annotation information of the first sample image data is manually annotated, and the method further includes: converting the first sample image data into a color image suitable for manual annotation.

Based on the second aspect, in a possible implementation manner, the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
In a third aspect, an embodiment of the present application provides an image detection apparatus, including: a collection module, configured to collect image data to be detected through a first camera device; a processing module, configured to process the image data to be detected by using an image processing algorithm to generate a processed image; and a detection module, configured to input the processed image into an image detection model to obtain a detection result, wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.

Based on the third aspect, in a possible implementation manner, the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.

Based on the third aspect, in a possible implementation manner, the parameters of the image processing algorithm are determined by a parameter adjustment module, and the parameter adjustment module includes: a comparison sub-module, configured to compare the detection result of the first sample image data with the annotation information of the first sample image data to obtain the comparison result; an adjustment sub-module, configured to iteratively adjust the parameters of the image processing algorithm based on the comparison result; and a saving sub-module, configured to save the parameters of the image processing algorithm when a preset condition is satisfied.

Based on the third aspect, in a possible implementation manner, the comparison result is an error, and the adjustment sub-module is further configured to: construct a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be updated in the image processing algorithm; and, based on the target loss function, iteratively update the parameters of the image processing algorithm by using a backpropagation algorithm and a gradient descent algorithm.

Based on the third aspect, in a possible implementation manner, the image processing algorithm includes at least one of the following image processing procedures: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, and image noise reduction.

Based on the third aspect, in a possible implementation manner, the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates of the neutral color region in the image in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial domain Gaussian parameter and the pixel value domain Gaussian parameter in the image noise reduction algorithm.

Based on the third aspect, in a possible implementation manner, the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include: weight coefficients of the neural network used to generate the image processing model.

Based on the third aspect, in a possible implementation manner, the annotation information of the first sample image data is manually annotated, and the apparatus further includes: a conversion module, configured to convert the first sample image data into a color image suitable for manual annotation.

Based on the third aspect, in a possible implementation manner, the image detection model is used to perform at least one of the following detection tasks: labeling of detection frames, recognition of target objects, prediction of confidence levels, and prediction of motion trajectories of target objects.
第四方面,本申请实施例提供一种电子设备,该电子设备包括:第一摄像装置,用于采集待检测的图像数据;图像信号处理器,用于利用图像处理算法对所述待检测的图像数据进行处理,生成处理后的图像;人工智能处理器,用于将所述处理后的图像输入至图像检测模型,得到检测结果;其中,所述图像处理算法的参数是通过比较所述第一摄像装置所采集的第一样本图像数据的标注信息和所述图像检测模型对所述第一样本图像数据的检测结果、并且基于比较结果进行调整得到的。In a fourth aspect, an embodiment of the present application provides an electronic device, the electronic device includes: a first camera device, configured to collect image data to be detected; an image signal processor, configured to use an image processing algorithm to detect the image data to be detected. The image data is processed to generate a processed image; an artificial intelligence processor is used to input the processed image into an image detection model to obtain a detection result; wherein, the parameters of the image processing algorithm are obtained by comparing the first It is obtained by adjusting the annotation information of the first sample image data collected by a camera device and the detection result of the first sample image data by the image detection model and based on the comparison result.
第五方面,本申请实施例提供一种图像检测装置,该图像检测装置包括一个或多个处理器和存储器;所述存储器耦合至所述处理器,所述存储器用于存储一个或多个程序;所述一个或多个处理器用于运行所述一个或多个程序,以实现如第一方面所述的方法。In a fifth aspect, an embodiment of the present application provides an image detection device, the image detection device includes one or more processors and a memory; the memory is coupled to the processor, and the memory is used to store one or more programs ; the one or more processors are configured to run the one or more programs to implement the method according to the first aspect.
In a sixth aspect, an embodiment of the present application provides a parameter adjustment apparatus for image processing, including one or more processors and a memory; the memory is coupled to the processors and is configured to store one or more programs; and the one or more processors are configured to run the one or more programs to implement the method according to the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by at least one processor, implements the method according to the first aspect or the second aspect.
In an eighth aspect, an embodiment of the present application provides a computer program product which, when executed by at least one processor, implements the method according to the first aspect or the second aspect.
It should be understood that the second to eighth aspects of the present application are consistent with the technical solution of the first aspect; the beneficial effects obtained by each aspect and the corresponding feasible implementations are similar and are not repeated here.
Description of Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Evidently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an image processing flow performed jointly by an ISP and an AI processor according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a vehicle according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a system architecture including an electronic device for parameter debugging of an image processing algorithm according to an embodiment of the present application;
FIG. 5 is a flowchart of a parameter debugging method for an image processing algorithm according to an embodiment of the present application;
FIG. 6 is a flowchart of a specific application of the parameter debugging method for an image processing algorithm according to an embodiment of the present application;
FIG. 7 is a flowchart of an image detection method according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a parameter debugging apparatus according to an embodiment of the present application.
Detailed Description of Embodiments
The embodiments of the present application are described below with reference to the accompanying drawings. The following description refers to the accompanying drawings, which form a part of this application and illustrate, by way of example, specific aspects of the embodiments of the present application or specific aspects in which the embodiments of the present application may be used. It should be understood that the embodiments of the present application may be used in other aspects, and may include structural or logical changes not depicted in the accompanying drawings. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the present application is defined by the appended claims. For example, it should be understood that disclosures made in connection with a described method may equally apply to a corresponding device or system configured to perform the method, and vice versa. For example, if one or more specific method steps are described, the corresponding device may include one or more units, such as functional units, to perform the described one or more method steps (for example, one unit performing the one or more steps, or a plurality of units each performing one or more of the plurality of steps), even if such one or more units are not explicitly described or illustrated in the accompanying drawings. Conversely, if a specific apparatus is described based on one or more units, such as functional units, the corresponding method may include one step to perform the functionality of the one or more units (for example, one step performing the functionality of the one or more units, or a plurality of steps each performing the functionality of one or more of the plurality of units), even if such one or more steps are not explicitly described or illustrated in the accompanying drawings. Further, it should be understood that, unless expressly stated otherwise, the features of the exemplary embodiments and/or aspects described herein may be combined with each other.
The image detection method described in this application may be applied in the field of computer vision, in scenarios where an image detection model trained on sample images collected by other photographing devices needs to be combined with a new photographing device.
Refer to FIG. 1, which shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in FIG. 1, the electronic device 100 may be a user equipment (User Equipment, UE), such as a mobile phone, a tablet computer, a smart screen, an image capturing device, or another type of device. In addition, the electronic device 100 may also be a vehicle. A camera device 101 may be provided in the electronic device 100 to collect image data. Furthermore, the electronic device 100 may also be, or be integrated into, a module, chip, chipset, circuit board, or component of an electronic device, where the chip or chipset, or the circuit board carrying the chip or chipset, can operate when driven by the necessary software. The electronic device 100 includes one or more processors, for example, an image signal processor (ISP, Image Signal Processor) 102 and an AI processor 103. Optionally, the one or more processors may be integrated in one or more chips, and the one or more chips may be regarded as a chipset; when one or more processors are integrated in the same chip, the chip is also called a system on a chip (System on a Chip, SoC). In addition to the one or more processors, the electronic device 100 further includes one or more other necessary components, such as a memory.
The camera device 101 shown in FIG. 1 may be a monocular camera. Alternatively, the camera device 101 may include multiple cameras, which may be physically combined in one camera device or physically distributed across multiple camera devices. The multiple cameras capture multiple images at the same moment, and these images may be processed to obtain one image to be detected. Of course, the camera device 101 may also take other forms, which are not specifically limited in the embodiments of the present application. In a specific implementation, the camera device 101 may collect image data in real time or periodically, for example with a period of 3 s, 5 s, or 10 s. The camera device 101 may also collect image data in other ways, which are not specifically limited in the embodiments of the present application. After collecting the image data, the camera device 101 may transmit the image data to the ISP 102.
The ISP 102 shown in FIG. 1 may be provided with multiple hardware modules, or run the necessary software programs, to process image data and to communicate with the AI processor 103. The ISP 102 may be a stand-alone component or be integrated into another digital logic device, including but not limited to a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or a DSP (Digital Signal Processor). Illustratively, the CPU, GPU, and DSP are all processors within a system on a chip. The ISP 102 may perform multiple image processing processes, including but not limited to: dark current correction, response nonlinearity correction, shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, edge enhancement, noise reduction, and color correction.
It should be noted that the ISP 102 performs the foregoing multiple image processing processes by running image processing algorithms. Each of the foregoing image processing processes may be regarded as an independent image processing process, and accordingly the image processing algorithm used to perform each process may be regarded as independent. On this basis, the ISP 102 may include multiple logic modules, including but not limited to a dark current correction module, a response nonlinearity correction module, a shading correction module, a demosaicing module, and the like, each of which performs one image processing process. Each logic module may use its own specific hardware structure, or multiple logic modules may share one hardware structure, which is not limited in the embodiments of the present application. In addition, the one or more image processing processes are usually performed sequentially. For example, after the image data acquired by the camera device 101 is provided to the ISP, dark current correction, response nonlinearity correction, shading correction, demosaicing, white balance correction, and so on may be performed in sequence. It should be noted that the embodiments of the present application do not limit the order of the image processing processes performed by the ISP; for example, white balance correction may be performed before demosaicing.
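Purely for illustration (and not as part of the claimed subject matter), the sequential pipeline described above can be sketched as an ordered list of stage functions applied one after another. The stage names follow the text; the one-dimensional pixel list and all numeric constants are assumptions made only for this sketch.

```python
# Hypothetical sketch of a sequential ISP pipeline: each stage is a function
# that takes image data and returns processed image data. Implementations are
# placeholders; a real ISP operates on 2-D sensor data with tuned parameters.

def dark_current_correction(img):
    # Subtract an assumed fixed dark-current offset from every pixel.
    return [max(p - 2, 0) for p in img]

def shading_correction(img):
    # Apply an assumed flat gain compensating lens shading.
    return [p * 1.1 for p in img]

def white_balance_correction(img):
    # Apply an assumed global white-balance gain.
    return [p * 0.95 for p in img]

# The stages run in a configurable order (as the text notes, the order is not
# fixed; e.g. white balance may be moved ahead of demosaicing).
PIPELINE = [dark_current_correction, shading_correction, white_balance_correction]

def run_pipeline(raw, stages=PIPELINE):
    data = raw
    for stage in stages:
        data = stage(data)
    return data

processed = run_pipeline([10, 2, 30])
```

Reordering the `PIPELINE` list is then enough to model a different stage order without touching any stage implementation.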
The AI processor 103 shown in FIG. 1 may include a dedicated neural processor such as a neural network processing unit (Neural-network Processing Unit, NPU), including but not limited to a convolutional neural network processor, a tensor processor, or a neural processing engine. The AI processor may be a stand-alone component or be integrated into another digital logic device, including but not limited to a CPU, a GPU, or a DSP. The AI processor 103 may run an image detection model obtained by training a deep neural network based on a sample image data set S1. The image detection model can perform specific detection tasks, including but not limited to: labeling of detection frames, recognition of target objects, prediction of confidence levels, prediction of motion trajectories of target objects, or image segmentation. It should be noted that the image detection model is deployed in the AI processor 103 shown in FIG. 1 after training is completed on the offline side, where the offline side may be regarded as a server device or a device used for model training.
In a possible implementation, in addition to running the image detection model to perform image detection operations, the AI processor 103 may also perform one or more image processing operations, including but not limited to: demosaicing, white balance correction, tone mapping, contrast enhancement, edge enhancement, noise reduction, and color correction. In this case, the AI processor 103 may also run one or more image processing models, each of which performs a specific image processing process. In this possible implementation, the image data acquired from the camera device 101 may undergo multiple image processing processes to generate a final image processing result; the AI processor 103 may perform one or more of these processes (corresponding to the one or more image processing operations above), and the ISP 102 may likewise perform one or more of them. The AI processor 103 and the ISP 102 may perform different image processing processes; they may also perform the same image processing process, for example further enhancement processing, which is not limited in this embodiment. In this possible implementation, each image processing model may be obtained by training a neural network using a machine learning method based on a sample image data set S3. The sample image data set S3 includes multiple sample image data H and, for each sample image data H, corresponding reference image data I. The reference image data I is used for comparison, and the image data compared with it is the image data obtained after the neural network to be trained performs image processing on the sample image data H. It should be noted that the reference image data I and its corresponding sample image data H present the same scene, which may be understood as follows: the target objects presented in the reference image data I and in the corresponding sample image data H, and the positions of those target objects in the images, are the same. In addition, different image processing processes performed by the image processing model require different sample image data sets S3 for training. For example, when the image processing model performs demosaicing, each sample image data H in the sample image data set S3 is single-channel raw image format (RAW, Raw Image Format) data of size a*b*1, and each corresponding reference image data I is RGB image data of size a*b*3, where a is the vertical pixel count of the image, b is the horizontal pixel count, 1 denotes a single channel (for example, the R, G, or B channel), and 3 denotes three channels (RGB). As another example, when the image processing model performs white balance correction, each sample image data H in the sample image data set S3 is single-channel RAW image data of size a*b*1, and each corresponding reference image data I is single-channel image data of size a*b*1; the reference image data I and its corresponding sample image data H present the same scene but have different white balance values. The training process of the image processing model is introduced below, taking demosaicing as an example. Each sample image data H in the sample image data set S3 is input to the neural network to be trained, yielding processed image data. A loss function is constructed based on the error between the processed image data and the reference image data I; the loss function may include, but is not limited to, a mean absolute error (MAE) loss function or a mean squared error (MSE) loss function, and it involves the weight coefficients of each layer of the neural network to be trained. Based on the constructed loss function, the backpropagation algorithm and gradient descent are used to iteratively adjust the weight coefficient values of each layer of the neural network until the error between the processed image data output by the model and the reference image data I is less than or equal to a preset threshold, or until the number of iterations reaches a preset limit, at which point the weight coefficient values of each layer are saved. At this point, the trained neural network is the image processing model.
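The training loop described above (construct an MSE loss between the processed output and the reference image data I, then iterate gradient descent until the error falls below a preset threshold or an iteration limit is reached) can be sketched as follows. As an assumption made purely for illustration, the "neural network to be trained" is reduced to a single gain parameter whose gradient can be written analytically, standing in for backpropagation through a multi-layer network.

```python
# Minimal, hypothetical sketch of the described training loop: minimize the
# MSE between processed outputs and reference data I by gradient descent,
# stopping at a loss threshold or an iteration cap.

def mse_loss(outputs, references):
    # Mean squared error between processed outputs and reference image data I.
    return sum((o - r) ** 2 for o, r in zip(outputs, references)) / len(outputs)

def train_gain(samples, references, lr=0.01, threshold=1e-6, max_iters=10000):
    gain = 0.0  # the single "weight coefficient" to be learned
    for _ in range(max_iters):
        outputs = [gain * s for s in samples]
        if mse_loss(outputs, references) <= threshold:
            break
        # Analytic gradient of the MSE w.r.t. the gain (stand-in for backprop).
        grad = sum(2 * (gain * s - r) * s
                   for s, r in zip(samples, references)) / len(samples)
        gain -= lr * grad
    return gain

# Toy data: each reference is exactly 2x its sample, so the learned gain -> 2.
learned = train_gain([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

With a multi-layer network, the analytic gradient line would be replaced by backpropagation, but the stopping logic (loss threshold or iteration limit) is the same as in the text.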
When the ISP 102 and the AI processor 103 jointly perform the image processing process, the ISP 102 may be provided with multiple ports, and likewise the AI processor 103 may be provided with multiple ports. The ISP 102 may provide processed image data A to the AI processor 103 through one of its ports; the AI processor 103 processes image data A to generate image data B, and provides image data B to the ISP 102 through one of its ports. Taking the AI processor 103 performing demosaicing as an example, the joint operation of the ISP 102 and the AI processor 103 is described with reference to FIG. 2. In FIG. 2, the ISP 102 acquires image data from the camera device 101, performs the three image processing processes of dark current correction, response nonlinearity correction, and shading correction on the acquired image data to generate image data A, and provides it through port Vio to the input port Vai of the AI processor 103. The image processing model run by the AI processor 103 demosaics image data A to generate image data B, and provides image data B through output port Vao to the input port Vii of the ISP 102. The ISP 102 performs subsequent image processing processes, such as white balance correction and color correction, on the image data B received at input port Vii to generate image data C, which is input to the AI processor 103. The image detection model run by the AI processor 103 may then perform image detection processing on image data C.
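The port-based handoff of FIG. 2 can be sketched as a chain of function calls, with each function standing in for one side of a port transfer. All stage implementations below are placeholders invented for the sketch; only the A → B → C data flow mirrors the text.

```python
# Hypothetical sketch of the ISP <-> AI-processor handoff of FIG. 2: the ISP
# runs its front-end stages, hands image data A to the AI processor over a
# port, receives demosaiced image data B back, finishes its back-end stages
# to produce image data C, and the AI processor runs detection on C.

def isp_front_end(raw):
    # Dark-current correction, response-nonlinearity correction, shading
    # correction (placeholder: offset removal only).
    return {"stage": "A", "pixels": [max(p - 1, 0) for p in raw]}

def ai_demosaic(image_a):
    # Image processing model on the AI processor (placeholder: replicate each
    # single-channel value into three channels).
    return {"stage": "B", "pixels": [[p, p, p] for p in image_a["pixels"]]}

def isp_back_end(image_b):
    # White balance correction, color correction, etc. (placeholder unit gain).
    return {"stage": "C",
            "pixels": [[c * 1.0 for c in px] for px in image_b["pixels"]]}

def ai_detect(image_c):
    # Image detection model (placeholder: count nonzero pixels).
    return sum(1 for px in image_c["pixels"] if px[0] > 0)

def process_and_detect(raw):
    a = isp_front_end(raw)   # ISP -> port Vio
    b = ai_demosaic(a)       # AI processor: port Vai in, port Vao out
    c = isp_back_end(b)      # ISP: port Vii in
    return ai_detect(c)      # detection result on image data C
```

The point of the sketch is only the control flow: each arrow in FIG. 2 becomes one function boundary, so either side can be reimplemented without changing the other.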
In the image processing approach combining the ISP 102 and the AI processor 103, there may be one or more AI processors 103. When there is one AI processor 103, the image processing model used to perform image processing and the image detection model used to perform image detection may be deployed in the same AI processor 103; when there are multiple AI processors 103, the image processing model and the image detection model may be deployed in different AI processors 103.
The parameters of the image processing algorithm run by the ISP 102 and the parameters of the image processing model run by the AI processor 103 described in the embodiments of the present application are obtained by debugging based on a sample image data set S2 collected by the camera device 101 and on the image detection results of the image detection model. For the method of debugging the parameters of the image processing algorithm run by the ISP 102 and of the image processing model, refer specifically to the embodiment shown in FIG. 5 below.
Usually, the sample image data set S1 used to train the image detection model running in the AI processor 103 is collected through big data, and the camera device used to collect the sample image data set S1 is different from the camera device 101 shown in FIG. 1. Because different camera devices differ significantly in manufacturing process, photoelectric response function, noise level, and other characteristics, the style of the sample image data D in the sample image data set S1 differs from the style of the images obtained by performing image processing on the image data collected by the camera device 101. This in turn causes a significant difference between the feature distributions, in a high-dimensional space, of the image data collected by the camera device 101 and of the sample image data D in the sample image data set S1. As a result, when the image detection model is deployed in the AI processor 103 and detects the image data collected by the camera device 101, the deviation between the detection results and the real results is relatively large, which reduces the detection accuracy of the image detection model deployed in the AI processor 103.
On this basis, the embodiments of the present application keep the parameters of the image detection model unchanged and adjust the parameters of the image processing algorithms used to perform the multiple image processing processes (or the parameters of those image processing algorithms together with the parameters of the image processing model), so that the style of the images obtained after image processing of the images collected by the camera device 101 is consistent with the style of the sample image data D in the sample image data set S1. This reduces the difference between the feature distributions, in a high-dimensional space, of the image data collected by the camera device 101 and of the sample image data D in the sample image data set S1, which helps improve the inference accuracy of the image detection model.
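The tuning idea above can be sketched as follows: the detection model stays frozen, and only an image processing parameter (here a single hypothetical gain) is searched so that the detection results on camera samples best match the manual annotations. The threshold detector, the grid search, and all values are assumptions made for the sketch; the actual embodiments adjust the algorithm parameters based on the comparison result as described with reference to FIG. 5.

```python
# Illustrative, simplified sketch of the tuning idea: the detection model is
# frozen; only an ISP parameter (a single gain) is searched so that detection
# results on camera samples best match the manual annotations.

def frozen_detector(pixel):
    # Stand-in for the trained image detection model: its fixed threshold
    # represents the learned parameters, which are never modified here.
    return 1 if pixel > 10 else 0

def detection_error(gain, samples, annotations):
    # Count disagreements between the detector's output on ISP-processed
    # samples and the manual annotation information.
    return sum(frozen_detector(gain * s) != a
               for s, a in zip(samples, annotations))

def tune_gain(samples, annotations, candidates):
    # Grid search over candidate ISP gains; the detector itself is untouched.
    return min(candidates, key=lambda g: detection_error(g, samples, annotations))

samples = [2, 3, 8, 9]       # raw values from the new camera device
annotations = [0, 0, 1, 1]   # manual labels: the last two are objects
best = tune_gain(samples, annotations, [0.5, 1.0, 1.5, 2.0])
```

Because only the image processing parameter moves, the search needs no gradient through the detection model and far fewer labeled samples than retraining would, which is the saving the text describes.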
In traditional computer vision technology, to adapt a trained image detection model to a new camera device and improve the accuracy of its detection results, it is usually necessary to retrain or fine-tune the trained image detection model with a new data set. However, collecting and annotating a new data set consumes considerable manpower and material resources, and retraining inevitably requires the image detection model to "forget" some of the knowledge it has already learned, so the historical data cannot be fully exploited. In the embodiments of the present application, the parameters of the trained image detection model are kept unchanged, and the parameters of the image processing algorithms used to perform the multiple image processing processes (or those parameters together with the parameters of the image processing model) are adjusted to reduce the difference between the feature distributions, in a high-dimensional space, of the image data collected by the camera device 101 and of the sample image data D. Compared with retraining the image detection model as in the traditional technology, the embodiments of the present application require no modification to the trained image detection model, saving the time and computing power needed for retraining and fine-tuning. In addition, because the embodiments of the present application adjust the parameters of image processing algorithms, which do not themselves perform the image detection process, training can be completed with fewer training samples, reducing the number of training samples requiring manual annotation and shortening the debugging cycle when the image detection model is used with a new camera device.
Taking an autonomous driving scenario as an example, the application scenario of the embodiments of the present application is described in more detail below with reference to the schematic structural diagram of the electronic device 100 shown in FIG. 1. Refer to FIG. 3, which shows a schematic structural diagram of a vehicle 300 according to an embodiment of the present application.
Components coupled to or included in the vehicle 300 may include a control system 10, a propulsion system 20, and a sensor system 30. It should be understood that the vehicle 300 may also include more systems, which are not described here. The control system 10 may be configured to control the operation of the vehicle 300 and its components. The ISP 102 and the AI processor 103 shown in FIG. 1 may be provided in the control system 10; in addition, the control system 10 may also include devices such as a central processing unit and a memory, where the memory stores the instructions and data required for each processor to run. The propulsion system 20 may provide powered motion for the vehicle 300 and may include, but is not limited to, an engine/motor, an energy source, a transmission, and wheels. The sensor system 30 may include, but is not limited to, a global positioning system, an inertial measurement unit, a lidar sensor, or a millimeter-wave radar sensor; the camera device 101 shown in FIG. 1 may be provided in the sensor system 30. The components and systems of the vehicle 300 may be coupled together through a system bus, a network, and/or other connection mechanisms so as to operate interconnected with other components inside and/or outside their respective systems. In specific operation, the components in the vehicle 300 cooperate with each other to implement various autonomous driving functions, which may include, but are not limited to, blind spot detection, parking assist, or lane change assist.
In the process of realizing the above automatic driving functions, the camera device 101 may periodically collect image data and provide the collected image data to the ISP 102. The ISP 102 (or the ISP 102 together with the image processing model in the AI processor 103) processes the image data by performing multiple image processing procedures, converting it into image data that the image detection model running in the AI processor 103 can recognize or compute, and provides this to the AI processor 103, so that the AI processor 103 performs inference or detection for a specific task and generates a detection result. Based on the detection result of the AI processor 103, other components in the control system 10 (for example, a CPU that makes decisions) control other devices or components to perform the automatic driving functions.
Usually, a vehicle manufacturer may not produce all components itself: it may order an already trained image detection model from manufacturer A and order the camera device from manufacturer B. To enable the image detection model to detect the image data acquired by the camera device more accurately, the training method described in the embodiments of the present application may be used to debug the parameters of the image processing algorithm or image processing model used to execute the image processing flow. For another example, when a manufacturer upgrades certain vehicle models, it may need to replace the previously configured camera device with a camera device of a different model. In this case, in order to match the replaced camera device with the image detection model, so that the detection results of the image detection model on the image data acquired by the new camera device are more accurate, the training method described in the embodiments of the present application may likewise be used to debug the parameters of the image processing algorithm or image processing model used to execute the image processing flow.
It should be noted that the parameter debugging of the image processing algorithm or the image processing model may be completed offline (that is, training is completed on a server or on a device used for model training). After the parameters of the image processing algorithm have been debugged, the image processing algorithm can be deployed in the ISP of the terminal. When some image processing procedures are performed by an image processing model, after the parameters in the image processing model have been debugged, the image processing model can be deployed in the AI processor 103. Please refer to FIG. 4, which shows a schematic diagram of a system architecture 400 including an electronic device for parameter debugging of an image processing algorithm provided by an embodiment of the present application.
In FIG. 4, the system architecture 400 includes a camera device 101, a parameter debugging device 401, a storage device 402, and a display device 403. The camera device 101 is used to collect a plurality of sample image data E and store the collected sample image data E in the storage device 402. The camera device 101 here is the same (or the very same) camera device as the camera device 101 shown in FIG. 1. The storage device 402 may include, but is not limited to: a read-only memory or a random access memory, and is used to store the sample image data E. In addition, the storage device 402 may also store the executable program and data of the image processing algorithm used to perform the image processing procedures, and the executable program and data of the image detection model used to perform image detection. The parameter debugging device 401 can run the image processing algorithm and the image detection model, and can also retrieve from the storage device 402 the sample image data E, the executable program and data of the image processing algorithm used to perform the image processing procedures, and the executable program and data of the image detection model used to perform image detection, so as to debug the parameters of the image processing algorithm. In addition, the parameter debugging device 401 may also store the data generated during operation and the debugging result after each parameter adjustment of the image processing algorithm into the storage device 402. Furthermore, the parameter debugging device 401 and the storage device 402 may be provided with I/O ports for data interaction with the display device 403. The display device 403 may include a display apparatus such as a screen, used for annotating the sample image data E. Specifically, the parameter debugging device 401 may acquire the sample image data E from the storage device 402, perform image processing on it, and provide the result to the display device 403 for presentation there. Through the display device 403, the user annotates the sample image data E, and the annotation information of the sample image data E is stored in the storage device 402.
In the embodiments of the present application, to facilitate processing of the sample image data E by the image processing algorithm and detection of images by the image detection model, the sample image data E output by the camera device 101 is high-bit-depth (for example, 16-bit, 20-bit, or 24-bit) single-channel linear RAW image data, whose dynamic range is far larger than the dynamic range that a display can present. Moreover, the sample image data E is a color filter array (CFA) image, which carries no color information; therefore, it is difficult for annotators to identify the individual target objects from the sample image data E output by the camera device 101. For this reason, in the embodiments of the present application, the parameter debugging device 401 also runs an image processing algorithm T, which is used to process the sample image data E to generate a color image, such as an RGB image, that can be presented on a display with suitable brightness and color, so that annotators can label the target objects presented in the sample image data E. The image processing flow performed by the image processing algorithm T may include, but is not limited to: system error correction, global tone mapping, demosaicing, or white balance correction. None of the parameters in the image processing algorithm T need to be adjusted; it can be implemented with conventional image processing algorithms. It should be noted that the image processing algorithm T described in the embodiments of the present application is used to process the sample image data E to generate image data that can be shown on a display for annotators to label, whereas the image processing algorithm described in the embodiments of the present application is used to process the sample image data E to generate image data for image detection by the image detection model, and its parameters need to be adjusted.
Based on the schematic structural diagram of the electronic device 100 shown in FIG. 1, the vehicle application scenario shown in FIG. 3, and the system architecture 400 shown in FIG. 4, the parameter debugging method for image processing is described in detail below with reference to FIG. 5 and FIG. 6.
Please refer to FIG. 5, which shows a flow 500 of a method for debugging the parameters of an image processing algorithm provided by an embodiment of the present application. It should be noted that the execution body of the parameter debugging method for image processing described in the embodiments of the present application may be the parameter debugging device 401 shown in FIG. 4. As shown in FIG. 5, the parameter debugging method for image processing includes the following steps:
Step 501: Based on the sample image data set S2, use the image processing algorithm to process each sample image data E in the sample image data set S2 to generate a plurality of image data F.
The sample image data set S2 includes a plurality of sample image data E and the annotation information of each sample image data E. Each sample image data E in the sample image data set S2 is collected by the camera device 101 shown in FIG. 1. The annotation information of the sample image data E is labeled according to the detection content performed by the image detection model. For example, when the image detection model is used to perform target detection, the annotation information of the sample image data E may include the target object and the position of the target object in the sample image data; when the image detection model is used to perform pedestrian intent detection, the annotation information of the sample image data E may include the target object and the action information of the target object.
The image processing algorithm here is used to perform one or more image processing procedures, which include but are not limited to: dark current correction, response nonlinearity correction, lens shading correction, demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement, or edge enhancement. It should be noted that these image processing procedures are usually performed sequentially; the embodiments of the present application do not specifically limit their execution order.
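As an illustration of how such sequentially executed procedures can be composed, consider the following minimal sketch. The stage names, toy single-channel data, and parameter values are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch: each processing stage is a function from image to
# image, and the pipeline applies the stages in order.
def dark_current_correction(img, offset=0.01):
    # subtract a fixed dark-current offset, clamping at zero
    return [max(p - offset, 0.0) for p in img]

def tone_mapping(img, gamma=2.2):
    # global gamma curve on normalised linear values in [0, 1]
    return [p ** (1.0 / gamma) for p in img]

def run_pipeline(img, stages):
    for stage in stages:
        img = stage(img)
    return img

processed = run_pipeline([0.05, 0.25, 1.0],
                         [dark_current_correction, tone_mapping])
```

Reordering the list of stages changes the result, which is why the execution order, although not limited here, must be consistent between tuning and deployment.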
Step 502: Detect the image data F using the image detection model to generate a detection result.
Here, the image detection model may perform at least one of the following kinds of detection: target detection, lane line detection, pedestrian intent detection, or the like. The image detection model is obtained by training a deep neural network on the image data set S1. It should be noted that the image data D in the image data set S1 is collected by camera devices other than the camera device 101. In addition, the training method of the image detection model is conventional technology and is not described here.
Step 503: Adjust the parameters of the image processing algorithm based on the detection result and the annotation information of the sample image data E.
In a possible implementation, the parameters of the image processing algorithm may be adjusted using a machine learning method. This implementation is described in detail below.
A loss function is constructed based on the error between the detection result of each sample image data E in the sample image data set S2 and the annotation information of that sample image data E. The loss function may include, but is not limited to, a cross-entropy function. Then, the parameters of the image processing modules used to execute one or more image processing procedures in the image processing algorithm are adjusted using the backpropagation algorithm and a gradient descent algorithm. The gradient descent algorithm may specifically include, but is not limited to, optimization algorithms such as SGD and Adam. When performing backpropagation based on the preset loss function, the chain rule can be used to compute the gradient of the preset loss function with respect to each parameter in the image processing algorithm.
Specifically, the image processing algorithms used to execute the respective image processing procedures are mutually independent. When the image processing procedures performed by the image processing algorithm are, in order, dark current correction, response nonlinearity correction, lens shading correction, demosaicing, white balance correction, noise reduction, contrast enhancement, and edge enhancement, then when the backpropagation algorithm is used to adjust the parameters of the image processing algorithm for each procedure, the parameters propagated to and adjusted first are those of the image processing algorithm used to perform edge enhancement, followed in turn by the parameters of the image processing algorithm used to perform contrast enhancement, the parameters of the image processing algorithm used to perform noise reduction, the parameters of the image processing algorithm used to perform white balance correction, and so on. It can be understood that the embodiments of the present application may include more or fewer image processing procedures, and correspondingly more or fewer parameters to be adjusted. In addition, since the embodiments of the present application do not limit the order of the image processing procedures performed by the image processing algorithm, the embodiments of the present application do not specifically limit which parameter is adjusted first and which is adjusted last when the backpropagation algorithm is used.
It should be noted that when the parameters of the image processing algorithm are adjusted by the backpropagation algorithm based on the preset loss function, all parameters in the image detection model are kept unchanged.
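A toy sketch of this arrangement follows: one image processing parameter is updated by gradient descent via the chain rule while the detection model is kept frozen. The stand-in model N, the data, and all constants are illustrative assumptions, not the patent's models:

```python
# Frozen detection model (weights fixed throughout tuning).
def N(y):
    return 2.0 * y - 0.5

x, label = 0.4, 0.9      # raw measurement and its annotation
g = 1.0                  # image processing parameter to be tuned
lr = 0.05                # gradient descent step size
for _ in range(200):
    y = g * x            # image processing stage with parameter g
    pred = N(y)          # detection; N's parameters are not updated
    # chain rule: dL/dg = dL/dpred * dpred/dy * dy/dg
    #           = 2*(pred - label) * 2.0 * x
    grad = 2.0 * (pred - label) * 2.0 * x
    g -= lr * grad
```

Only `g` moves; the gradient simply flows backward through the fixed detection model into the processing stage, which is the structure of the parameter adjustment described above.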
Step 504: Determine whether the loss value of the preset loss function is less than or equal to a preset threshold. If the loss value is less than or equal to the preset threshold, save the parameters of the image processing algorithm; if the loss value is greater than the preset threshold, perform step 505.
Step 505: Determine whether the number of iterations of adjusting the parameters of the image processing algorithm is greater than or equal to a preset threshold. If the number of iterations is greater than or equal to the preset threshold, save the parameters of the image processing algorithm; if it is less than the preset threshold, continue to perform steps 501 to 505.
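The dual stopping criteria of steps 504 and 505 can be sketched as follows; the halving step function is an illustrative stand-in for one complete tuning pass, not the actual loss computation:

```python
# Stop when the loss reaches the threshold, or when the iteration budget
# is exhausted, whichever comes first.
def tune(step_fn, max_iters=100, loss_threshold=1e-3):
    loss = float("inf")
    for i in range(1, max_iters + 1):
        loss = step_fn()
        if loss <= loss_threshold:
            break          # loss criterion met: parameters would be saved here
    return i, loss

state = {"loss": 1.0}
def halve_loss():          # toy stand-in: one tuning pass halves the loss
    state["loss"] /= 2.0
    return state["loss"]

iters, final_loss = tune(halve_loss)
```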
In a possible implementation of the present application, the one or more image processing procedures described above may all be implemented by conventional image processing algorithms. In that case, assume that the one or more image processing procedures include dark current correction, response nonlinearity correction, lens shading correction, demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement, and edge enhancement. The parameters of the image processing algorithm to be adjusted may then include, but are not limited to: the parameters of the image processing algorithm used to perform lens shading correction, the parameters of the image processing algorithm used to perform white balance correction, the parameters of the image processing algorithm used to perform tone mapping, the parameters of the image processing algorithm used to perform contrast enhancement, the parameters of the image processing algorithm used to perform edge enhancement, and the parameters of the image processing algorithm used to perform noise reduction. It should be noted that when a machine learning method is used to adjust the parameters of the image processing algorithm, the image processing algorithms described in the embodiments of the present application are all differentiable, so that gradients can be propagated backward based on the chain rule. The parameters to be adjusted in each image processing algorithm are described in detail below.
Lens shading correction is used to correct the illuminance falloff in the edge regions of the image caused by the increased angle of incidence of the chief ray. It fits the illuminance falloff surface with a polynomial whose independent variable is the distance from each image pixel to the optical center of the camera device. Therefore, in the image processing algorithm used to perform lens shading correction, the parameters to be adjusted are the distances from the image pixels to the optical center of the camera device, that is, the values of the independent variable in the polynomial.
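A minimal sketch of this polynomial model follows; the polynomial coefficients are illustrative assumptions, and r is the pixel's distance from the optical center, i.e. the independent variable described above:

```python
import math

# Radial shading gain modeled as a polynomial in the distance r.
def shading_gain(r, coeffs=(1.0, 0.0, 0.8)):
    return sum(c * r ** k for k, c in enumerate(coeffs))

# Compensate the illuminance falloff of one pixel at (x, y), given the
# optical center (cx, cy).
def correct_pixel(value, x, y, cx, cy):
    r = math.hypot(x - cx, y - cy)
    return value * shading_gain(r)
```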
The white balance correction procedure is as follows. First, a neutral-color pixel search algorithm is used to screen the neutral-color regions in the image, and the boundary coordinates of the neutral-color regions in the image are determined based on the screening result. Then, the pixel values in the screened neutral-color regions are weighted using the luminance channel of the image to generate a binarized neutral pixel mask. Next, the (near-)neutral pixels are weighted and averaged using this neutral pixel mask to obtain an estimate of the color of the light source in the image. Finally, the white balance correction coefficients corresponding to the image are obtained by computing the ratios between the RGB channels of the light source color, and applying these coefficients to the original image yields the white-balance-corrected image. Therefore, in the image processing algorithm used to perform white balance correction, the parameters to be adjusted are the boundary coordinates of the neutral-color regions in the image.
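The last two steps (light-source estimate from neutral pixels, then per-channel gains from the RGB ratios) can be sketched as follows; the unweighted average, the G-relative gains, and the pixel values are simplifying assumptions:

```python
# Estimate the light-source colour by averaging pixels already screened as
# neutral, then derive white balance gains relative to the G channel.
def wb_gains(neutral_pixels):          # list of (R, G, B) tuples
    n = len(neutral_pixels)
    r = sum(p[0] for p in neutral_pixels) / n
    g = sum(p[1] for p in neutral_pixels) / n
    b = sum(p[2] for p in neutral_pixels) / n
    return (g / r, 1.0, g / b)         # gains applied to R, G, B

def apply_wb(pixel, gains):
    return tuple(c * k for c, k in zip(pixel, gains))

gains = wb_gains([(0.8, 0.4, 0.2), (0.8, 0.4, 0.2)])
balanced = apply_wb((0.8, 0.4, 0.2), gains)
```

After correction, a neutral pixel has equal R, G, and B responses.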
Tone mapping receives a high-bit-depth linear image, converts it into a nonlinear image, compresses the image bit depth, and outputs an 8-bit image. When a global gamma function is used as the tone mapping function, the trainable parameter is the γ parameter; when a logarithmic transform is used to compress the dynamic range of the linear image, the trainable parameter is the base of the logarithm; when a more complex tone mapping model is used, for example a Retinex model based on the dynamic range response of the human eye, the trainable parameters are its target luminance parameter (key), target saturation parameter (saturation), and the filter kernel parameters used to generate the low-pass filtered image.
Contrast enhancement is used to enhance the contrast of the image. Specifically, the CLAHE (contrast limited adaptive histogram equalization) algorithm can be used to locally adjust the contrast of the image. The CLAHE algorithm has two adjustable parameters: the contrast threshold parameter and the size of the sub-image blocks used for histogram statistics. In a possible implementation of the embodiments of the present application, the size of the sub-image blocks may be fixed and only the contrast threshold parameter adjusted. Further, the size of the sub-image blocks may be fixed to the size of the input image.
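The contrast-limiting step of CLAHE on a single histogram (recall that the sub-image block may be fixed to the whole image) can be sketched as follows; this shows only the clipping step, with integer bin counts and uniform redistribution as simplifying assumptions:

```python
# Bin counts above the clip limit are truncated and the excess is
# redistributed uniformly across all bins (remainder dropped for brevity).
def clip_histogram(hist, clip_limit):
    excess = sum(max(h - clip_limit, 0) for h in hist)
    clipped = [min(h, clip_limit) for h in hist]
    bonus = excess // len(hist)
    return [h + bonus for h in clipped]
```

Lowering the clip limit flattens the histogram less aggressively, which is how the contrast threshold parameter bounds the amount of local contrast amplification.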
In image edge enhancement, Gaussian filtering is first applied to the Y-channel image of the received image to obtain a low-pass Y-channel image Y_L. The difference between the original Y-channel image and the low-pass Y-channel image Y_L is taken as the high-frequency signal of the image, i.e., Y_HF = Y − Y_L; this high-frequency signal usually corresponds to the edge regions of the image. By amplifying the high-frequency signal and superimposing it onto the low-pass Y-channel image Y_L, the edge-enhanced image Y_E is obtained: Y_E = Y_L + α·(Y − Y_L), where α is the edge enhancement factor, and the degree of edge enhancement increases as α increases. Among the parameters of the image processing algorithm used to perform edge enhancement, the parameter to be adjusted is the edge enhancement factor α.
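The formula Y_E = Y_L + α·(Y − Y_L) can be sketched on a 1-D luminance signal; a box blur stands in for the Gaussian low-pass filter, an assumption made to keep the example dependency-free:

```python
# Simple 3-tap box blur as the low-pass filter producing Y_L.
def box_blur(signal):
    out = []
    for i in range(len(signal)):
        lo, hi = max(i - 1, 0), min(i + 1, len(signal) - 1)
        window = signal[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

# Y_E = Y_L + alpha * (Y - Y_L): alpha = 1 reproduces Y, alpha > 1 sharpens.
def edge_enhance(y, alpha=1.5):
    y_l = box_blur(y)
    return [yl + alpha * (yo - yl) for yo, yl in zip(y, y_l)]
```

A flat signal has no high-frequency content and is left unchanged for any α, while α = 0 collapses the output to the low-pass image.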
For image noise reduction, a bilateral filter is usually used. In the bilateral filtering noise reduction algorithm, the trainable parameters may include: the spatial-domain Gaussian kernel parameter σs, which controls the relationship between the noise reduction strength and spatial distance, and the pixel range-domain Gaussian kernel parameter σr, which controls the relationship between the noise reduction strength and the difference in response values.
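A 1-D sketch of the bilateral filter with the two trainable parameters named above follows; the window radius and default parameter values are illustrative assumptions:

```python
import math

def bilateral_1d(signal, sigma_s=1.0, sigma_r=0.1, radius=2):
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(i - radius, 0), min(i + radius + 1, len(signal))):
            # spatial weight: falls off with distance |i - j|
            w_s = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2))
            # range weight: falls off with response difference
            w_r = math.exp(-((center - signal[j]) ** 2) / (2.0 * sigma_r ** 2))
            num += w_s * w_r * signal[j]
            den += w_s * w_r
        out.append(num / den)
    return out
```

Because the range weight suppresses neighbors with very different responses, smoothing happens within flat regions while sharp edges are largely preserved, which is the property that makes σr worth tuning jointly with the downstream detection model.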
In another possible implementation, some of the one or more image processing procedures described above may be implemented by image processing models. For example, the image processing procedures of demosaicing, white balance correction, tone mapping, noise reduction, contrast enhancement, and edge enhancement may be implemented by image processing models. There may be multiple kinds of image processing models, each used to perform a specific image processing operation; for example, a noise reduction image processing model performs noise reduction operations, and a demosaicing image processing model performs demosaicing operations. Each image processing model may be obtained by training a multi-layer neural network on training samples using conventional neural network training methods, which are not described further in the embodiments of the present application.
The work of each layer in each image processing model can be described by the mathematical expression y = a(W·x + b), where W is the weight, x is the input vector (the input neurons), b is the bias, y is the output vector (the output neurons), and a is the activation function. From a physical perspective, the work of each layer in a deep neural network can be understood as completing a transformation from the input space (the set of input vectors) to the output space (that is, from the row space to the column space of the matrix) through five operations on the input space: 1. raising/lowering the dimension; 2. scaling up/down; 3. rotation; 4. translation; 5. "bending". Operations 1, 2, and 3 are completed by W·x, operation 4 by +b, and operation 5 by a(). The word "space" is used here because what is processed is not a single object but a class of objects, and the space refers to the set of all individuals of that class. W is the weight vector, in which each value represents the weight value of one neuron in that layer of the neural network; this vector W determines the spatial transformation from input space to output space described above, that is, the weight W of each layer controls how the space is transformed. When one or more image processing models are used to process the image data, the adjusted image processing algorithm parameters are the weight matrices of all the layers forming the image processing models (the weight matrix formed by the vectors W of the many layers). It should be noted that the one or more image processing models described above are pre-trained offline.
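The per-layer expression y = a(W·x + b) can be sketched directly; tanh as the activation a() and the particular W, b, and x are illustrative assumptions:

```python
import math

# One layer: y = a(W*x + b), with a() = tanh.
def layer(x, W, b):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

y = layer([1.0, -1.0], [[0.5, 0.5], [1.0, 0.0]], [0.0, 0.0])
```

Stacking such layers and collecting their weight matrices gives exactly the set of parameters that the debugging method fine-tunes.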
When the parameter debugging method described in this application is used to adjust the parameters of the image processing algorithm, only the parameters of the neural network forming the image processing model need to be fine-tuned, so that the style characteristics of the images obtained after processing by the image processing model are similar to those of the sample image data used to train the image detection model, thereby improving the detection accuracy of the image detection model.
Based on the parameter debugging method for the image processing algorithm shown in FIG. 5, the parameter debugging method described in the embodiments of the present application is explained in more detail below, taking target object recognition as the detection function performed by the image detection model. Please refer to FIG. 6, which is a schematic diagram of a specific application of the parameter debugging method for image processing according to an embodiment of the present application. The execution body of the method described in FIG. 6 may be the parameter debugging device 401 shown in FIG. 4.
Step 601: Collect a plurality of sample image data E using the camera device 101. The plurality of sample image data E are high-bit-depth (for example, 16-bit, 20-bit, or 24-bit) single-channel linear RAW image data.
Step 602: Process the sample image data E using the image processing algorithm T to generate processed image data F for presentation on the display screen. The image data F is a color image, such as an RGB image.
Step 603: Manually annotate the image data F to obtain the annotation information of each sample image data E. Since the detection performed by the image detection model is target detection, the annotation information of the sample image data includes the category of the target object presented in the sample image data and its position in the sample image data.
Step 604: Process the sample image data using the image processing algorithm to generate image data D.
Step 605: Input the image data D into the image detection model to obtain an image detection result, where the image detection result includes the position region of the preset target object in the sample image data and the probability that it is the preset target object.
Step 606: Construct a loss function based on the image detection result and the annotation information of the sample image data E.
Suppose N(·) denotes the image detection model. Then:
L = ℒ(N(Y_out), G)
where ℒ(·) represents the loss function of the image detection model and G represents the annotation information of the sample image data E. Y_out represents the image data input to the image detection model, that is, the image data finally output by the image processing algorithm after it executes the multiple image processing procedures.
Step 607: Determine whether the loss value of the loss function reaches a preset threshold. If the preset threshold is not reached, perform step 608; if the preset threshold is reached, save the parameters of the image processing algorithm.
Step 608: Adjust the parameters of the image processing algorithm using the back-propagation algorithm and the gradient descent algorithm.
A more detailed description is given below, taking as an example the adjustment of the parameters of the image processing algorithm that executes the edge enhancement procedure. Edge enhancement of an image amplifies the high-frequency signal in the image and superimposes it onto the low-pass image to sharpen the image edges, namely: Y_E = Y_L + α·(Y − Y_L). For the meaning of each parameter in this formula, refer to the relevant description of image edge enhancement above, which is not repeated here. Among the parameters of the image processing algorithm that performs edge enhancement, the parameter to be adjusted is the edge enhancement factor α. Suppose P(·) denotes all image operations after edge enhancement, that is, Y_out = P(Y_E), and Y denotes the image obtained after all image operations before edge enhancement. With all parameters in P(·) held fixed, the gradient of the objective function L with respect to the edge enhancement factor α can be obtained according to the chain rule:
∂L/∂α = (∂L/∂Y_out)·(∂Y_out/∂Y_E)·(∂Y_E/∂α) = (∂L/∂Y_out)·(∂Y_out/∂Y_E)·(Y − Y_L)    (1)
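The chain-rule gradient with respect to α can be checked numerically. The sketch below assumes, purely for illustration, that P(·) is the identity and that the objective is a squared error against an arbitrary target; under those assumptions ∂Y_E/∂α = Y − Y_L, and the analytic gradient matches a central finite-difference estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.random((8, 8))      # image after all operations before edge enhancement
Y_L = rng.random((8, 8))    # stand-in for the low-pass image
T = rng.random((8, 8))      # arbitrary target driving the toy objective

def objective(a):
    """Toy objective with P(.) taken as the identity: L = sum((Y_E - T)^2)."""
    Y_E = Y_L + a * (Y - Y_L)           # edge enhancement: Y_E = Y_L + a*(Y - Y_L)
    return float(np.sum((Y_E - T) ** 2))

alpha = 0.5
Y_E = Y_L + alpha * (Y - Y_L)
# Chain rule: dL/da = dL/dY_out * dY_out/dY_E * (Y - Y_L); with P(.) the
# identity the outer factors reduce to 2 * (Y_E - T).
grad_analytic = float(np.sum(2.0 * (Y_E - T) * (Y - Y_L)))

h = 1e-6                                # central finite-difference check
grad_numeric = (objective(alpha + h) - objective(alpha - h)) / (2 * h)
```

The agreement between the two gradients is the property that back-propagation exploits when tuning α.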
According to the SGD algorithm, after one iteration the updated value α^(t+1) of α is:
α^(t+1) = α^(t) − δ·(∂L/∂α)|_{α=α^(t)}    (2)
where α^(t+1) is the value obtained after stepping the current value α^(t) of α a certain distance in the direction opposite to its gradient, and δ denotes the learning rate, which controls the step size of each iteration. Substituting formula (1) into formula (2) gives the updated value of the edge enhancement factor α:
α^(t+1) = α^(t) − δ·(∂L/∂Y_out)·(∂Y_out/∂Y_E)·(Y − Y_L)|_{α=α^(t)}
Assuming the current value of the edge enhancement factor α is α^(0), the value of α after one iteration is:
α^(1) = α^(0) − δ·(∂L/∂Y_out)·(∂Y_out/∂Y_E)·(Y − Y_L)|_{α=α^(0)}
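Repeated SGD iterations of the edge enhancement factor can be sketched as follows. The toy objective, the reachable target, and the learning rate are assumptions chosen for illustration; only the update rule α ← α − δ·∂L/∂α mirrors the formulas above:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.random((8, 8))
Y_L = rng.random((8, 8))
T = Y_L + 0.8 * (Y - Y_L)     # target consistent with alpha = 0.8 (assumption)

def loss_and_grad(a):
    """L = sum((Y_E - T)^2) with P(.) the identity; returns (L, dL/dalpha)."""
    err = Y_L + a * (Y - Y_L) - T
    return float(np.sum(err ** 2)), float(np.sum(2.0 * err * (Y - Y_L)))

alpha, delta = 0.0, 0.01      # alpha^(0) and learning rate (illustrative values)
losses = []
for _ in range(300):          # repeated iterations of the step-608 update
    L, g = loss_and_grad(alpha)
    losses.append(L)
    alpha -= delta * g        # alpha^(t+1) = alpha^(t) - delta * dL/dalpha
```

Because the toy objective is quadratic in α, the iterates contract geometrically toward the value of α that generated the target.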
Similarly, the process of adjusting the parameters of the image processing algorithms that execute the other image processing procedures may be analogous to the process of adjusting the parameters of the image processing algorithm that performs edge enhancement, and is not repeated here.
In the parameter tuning method shown in FIG. 6, by repeatedly executing steps 604 to 608, that is, by iteratively adjusting the parameters of the image processing algorithm multiple times, the optimal parameter values of the image processing algorithm that make the loss function L reach a minimum can be obtained.
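The iterative loop of steps 604 to 608 can be summarized in a skeleton. Everything concrete in it — the one-parameter "pipeline", the fixed "detection model", the learning rate, and the threshold — is a toy assumption; only the control flow mirrors the figure:

```python
def tune_parameter(samples, labels, param, lr=0.1, threshold=1e-4, max_iters=500):
    """Skeleton of steps 604-608: process -> detect -> loss -> threshold
    check -> gradient step. The scalar 'pipeline' and fixed 'detection
    model' below are toy stand-ins, not the patent's components."""
    process = lambda x, p: p * x          # step 604: image processing
    detect = lambda y: 2.0 * y            # step 605: pre-trained detector (fixed)
    for _ in range(max_iters):
        preds = [detect(process(x, param)) for x in samples]          # 604-605
        loss = sum((p - t) ** 2 for p, t in zip(preds, labels)) / len(labels)
        if loss <= threshold:             # step 607: small enough -> save/stop
            break
        grad = sum(2.0 * (p - t) * 2.0 * x                            # step 608
                   for p, t, x in zip(preds, labels, samples)) / len(labels)
        param -= lr * grad
    return param, loss

samples = [0.5, 1.0, 1.5]
labels = [3.0 * x for x in samples]       # annotations consistent with param = 1.5
best_param, final_loss = tune_parameter(samples, labels, param=0.0)
```

Note that the detection model itself stays frozen throughout: only the upstream processing parameter moves, which is exactly the division of labor between steps 605 and 608.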
Based on the electronic device 100 shown in FIG. 1 and the parameter tuning methods for image processing shown in FIG. 5 and FIG. 6, an embodiment of the present application further provides an image detection method. Please refer to FIG. 7, which shows a process 700 of the image detection method provided by an embodiment of the present application. The image detection method described in FIG. 7 may be executed by the ISP processor and the AI processor described in FIG. 1. As shown in FIG. 7, the image detection method includes the following steps:
Step 701: Collect image data to be detected through the camera 101.
Step 702: Process the image data to be detected with the image processing algorithm to generate a processed image.
Step 703: Input the processed image into the image detection model to obtain a detection result.
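Steps 701 to 703 compose into a short inference pipeline. In the sketch below, the tuned processing stage and the detection model are hypothetical stand-ins (a single gain parameter and a thresholded score) used only to show the data flow:

```python
def run_detection(raw, processing, model):
    """Steps 701-703: capture -> process with the tuned image processing
    algorithm -> feed the processed image to the detection model."""
    processed = processing(raw)          # step 702
    return model(processed)              # step 703

# Stand-ins for the tuned pipeline and the detection model (assumptions).
tuned_gain = 2.0
processing = lambda raw: [tuned_gain * v for v in raw]
model = lambda img: {"has_target": max(img) > 1.0, "score": max(img)}

result = run_detection([0.2, 0.7, 0.4], processing, model)
```

Because the processing parameters were tuned against the detector's own loss, the detector at inference time sees images in the distribution it was optimized for.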
The parameters of the image processing algorithm used in step 702 may be obtained after tuning with the parameter tuning method for image processing described in FIG. 5.
In a possible implementation, the image detection model is used to perform at least one of the following detection tasks: annotation of detection frames, recognition of target objects, prediction of confidence levels, and prediction of the motion trajectory of a target object.
It can be understood that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for executing each function. In combination with the algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each particular application in combination with the embodiments, but such implementations should not be considered to be beyond the scope of the present application.
In this embodiment, the above one or more processors may be divided into functional modules according to the foregoing method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above integrated module may be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical function division; there may be other division manners in actual implementation.
In the case where each functional module is divided corresponding to each function, FIG. 8 shows a possible schematic composition diagram of the image detection apparatus 800 involved in the above embodiments. As shown in FIG. 8, the image detection apparatus 800 may include: a collection module 801, a processing module 802, and a detection module 803. The collection module 801 is configured to collect image data to be detected through a first camera device; the processing module 802 is configured to process the image data to be detected using an image processing algorithm to generate a processed image; and the detection module 803 is configured to input the processed image into an image detection model to obtain a detection result; wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
In a possible implementation, the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.
In a possible implementation, the parameters of the image processing algorithm are determined by a parameter adjustment module. The parameter adjustment module (not shown in the figure) includes: a comparison submodule (not shown in the figure), configured to compare the detection result of the first sample image data with the annotation information of the first sample image data to obtain the comparison result; an adjustment submodule (not shown in the figure), configured to iteratively adjust the parameters of the image processing algorithm based on the comparison result; and a saving submodule (not shown in the figure), configured to save the parameters of the image processing algorithm when a preset condition is satisfied.
In a possible implementation, the comparison result is an error, and the adjustment submodule (not shown in the figure) is further configured to: construct a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, where the target loss function includes the parameters to be updated in the image processing algorithm; and, based on the target loss function, iteratively update the parameters of the image processing algorithm using the back-propagation algorithm and the gradient descent algorithm.
In a possible implementation, the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, and image noise reduction.
In a possible implementation, the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates in the image of the neutral color region in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
In a possible implementation, the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include: the weight coefficients of the neural network used to generate the image processing model.
In a possible implementation, the annotation information of the first sample image data is manually annotated; and the apparatus further includes: a conversion module (not shown in the figure), configured to convert the first sample image data into a color image suitable for manual annotation.
In a possible implementation, the image detection model is used to perform at least one of the following detection tasks: annotation of detection frames, recognition of target objects, prediction of confidence levels, and prediction of the motion trajectory of a target object.
The image detection apparatus 800 provided in this embodiment is configured to execute the image detection method executed by the electronic device 100, and can achieve the same effects as the above implementation methods. The modules corresponding to FIG. 8 may be implemented in software, hardware, or a combination of the two. For example, each module may be implemented in the form of software to drive the ISP 102 and the AI processor 103 in the electronic device 100 shown in FIG. 1. Alternatively, each module may include two parts: a corresponding processor and corresponding driver software.
In the case where each functional module is divided corresponding to each function, FIG. 9 shows a possible schematic composition diagram of the parameter tuning apparatus 900 for image processing involved in the above embodiments. As shown in FIG. 9, the parameter tuning apparatus 900 for image processing may include: a processing module 901, a detection module 902, a comparison module 903, and an adjustment module 904. The processing module 901 is configured to perform image processing on first sample image data using an image processing algorithm to generate first image data, where the first sample image data is collected by a first camera device; the detection module 902 is configured to input the first image data into a pre-trained image detection model to obtain a detection result; the comparison module 903 is configured to compare the error between the detection result and the annotation information of the first sample image data to obtain a comparison result; and the adjustment module 904 is configured to adjust the parameters of the image processing algorithm based on the comparison result.
In a possible implementation, the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.
In a possible implementation, the comparison result is an error, and the adjustment module is configured to: construct a target loss function based on the error between the detection result and the annotation information of the first sample image data, where the target loss function includes the parameters to be adjusted in the image processing algorithm; and, based on the target loss function, iteratively adjust the parameters of the image processing algorithm using the back-propagation algorithm and the gradient descent algorithm.
In a possible implementation, the image processing algorithm includes at least one of the following: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, or image noise reduction.
In a possible implementation, the parameters of the image processing algorithm include at least one of the following: the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm; the boundary coordinates in the image of the neutral color region in the white balance correction algorithm; the target brightness, the target saturation, and the filter kernel parameters used to generate the low-pass filtered image in the tone mapping algorithm; the contrast threshold in the contrast enhancement algorithm; the edge enhancement factor in the image edge enhancement algorithm; and the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
In a possible implementation, the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include: the weight coefficients of the neural network used to generate the image processing model.
In a possible implementation, the annotation information of the first sample image data is manually annotated; and the apparatus further includes: a conversion module (not shown in the figure), configured to convert the first sample image data into a color image suitable for manual annotation.
In a possible implementation, the image detection model is used to perform at least one of the following detection tasks: annotation of detection frames, recognition of target objects, prediction of confidence levels, and prediction of the motion trajectory of a target object.
In the case of using integrated units, the image detection apparatus 800 may include at least one processor and a memory. The at least one processor may invoke all or part of a computer program stored in the memory to control and manage the actions of the electronic device 100; for example, it may be used to support the electronic device 100 in performing the steps performed by the above modules. The memory may be used to support the electronic device 100 in storing program code, data, and the like. The processor may implement or execute the various exemplary logic modules described in combination with the disclosure of the present application, and may be a combination of one or more microprocessors that implement computing functions, including but not limited to the ISP 102 and the AI processor 103 shown in FIG. 1. In addition, the microprocessor combination may further include a central processing unit, a controller, and the like. Besides the processors shown in FIG. 1, the processor may further include other programmable logic devices, transistor logic devices, or discrete hardware components. The memory may include random access memory (RAM), read-only memory (ROM), and the like. The random access memory may include volatile memory (such as SRAM, DRAM, DDR (Double Data Rate SDRAM), or SDRAM) and non-volatile memory. The RAM may store the data (such as image processing algorithms) and parameters required for the operation of the ISP 102 and the AI processor 103, the intermediate data generated during their operation, the image data processed by the ISP 102, the output results of the AI processor 103 after running, and the like. Executable programs of the ISP 102 and the AI processor 103 may be stored in the read-only memory (ROM). Each of the above components may perform its own work by loading an executable program. The executable program stored in the memory may execute the image detection method described in FIG. 7.
In the case of using integrated units, the parameter tuning apparatus 900 may include at least one processor and a storage device. The at least one processor may invoke all or part of a computer program stored in the memory to control and manage the actions of the parameter tuning device 401 shown in FIG. 4; for example, it may be used to support the parameter tuning device 401 in performing the steps performed by the above modules. The memory may be used to support the parameter tuning device 401 in storing program code, data, and the like. The processor may implement or execute the various exemplary logic modules described in combination with the disclosure of the present application, and may be a combination of one or more microprocessors that implement computing functions, including but not limited to a central processing unit, a controller, and the like. In addition, the processor may further include other programmable logic devices, transistor logic devices, or discrete hardware components. The memory may include random access memory (RAM), read-only memory (ROM), and the like. The random access memory may include volatile memory (such as SRAM, DRAM, DDR (Double Data Rate SDRAM), or SDRAM) and non-volatile memory. The RAM may store the data (such as image processing algorithms) and parameters required for the operation of the parameter tuning device 401, the intermediate data generated during its operation, the output results after its operation, and the like. An executable program of the parameter tuning device 401 may be stored in the read-only memory (ROM). Each of the above components may perform its own work by loading an executable program. The executable program stored in the memory may execute the parameter tuning method described in FIG. 5 or FIG. 6.
This embodiment further provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions that, when run on a computer, cause the computer to execute the above related method steps to implement the image detection method of the image detection apparatus 800 in the above embodiments, or to implement the parameter tuning method of the parameter tuning apparatus 900 in the above embodiments.
This embodiment further provides a computer program product that, when run on a computer, causes the computer to execute the above related steps to implement the image detection method of the image detection apparatus 800 in the above embodiments, or to implement the parameter tuning method of the parameter tuning apparatus 900 in the above embodiments.
The computer-readable storage medium and the computer program product provided in this embodiment are both used to execute the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
From the description of the above implementations, those skilled in the art can understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example for illustration. In practical applications, the above functions can be allocated to different functional modules as required; that is, the internal structure of the apparatus can be divided into different functional modules to complete all or part of the functions described above.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned readable storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein, and that these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (22)

1. An image detection method, characterized by comprising:
collecting image data to be detected through a first camera device;
processing the image data to be detected using an image processing algorithm to generate a processed image, wherein parameters of the image processing algorithm are obtained by comparing annotation information of first sample image data collected by the first camera device with a detection result of an image detection model on the first sample image data, and adjusting based on the comparison result; and
inputting the processed image into the image detection model to obtain a detection result.
2. The image detection method according to claim 1, characterized in that the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.
3. The image detection method according to claim 1 or 2, characterized in that the parameters of the image processing algorithm are determined by the following steps:
comparing the detection result of the first sample image data with the annotation information of the first sample image data to obtain the comparison result;
iteratively adjusting the parameters of the image processing algorithm based on the comparison result; and
saving the parameters of the image processing algorithm when a preset condition is satisfied.
4. The image detection method according to claim 3, characterized in that the comparison result is an error, and the iteratively adjusting the parameters of the image processing algorithm based on the comparison result comprises:
constructing a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, wherein the target loss function includes the parameters to be adjusted in the image processing algorithm; and
iteratively adjusting the parameters of the image processing algorithm using a back-propagation algorithm and a gradient descent algorithm based on the target loss function.
5. The image detection method according to any one of claims 1-4, characterized in that the image processing algorithm comprises at least one of the following: a dark current correction algorithm, a lens shading correction algorithm, a demosaicing algorithm, a white balance correction algorithm, a tone mapping algorithm, a contrast enhancement algorithm, an image edge enhancement algorithm, and an image noise reduction algorithm.
  6. The image detection method according to claim 5, wherein the parameters of the image processing algorithm include at least one of the following:
    the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm;
    the boundary coordinates in the image of the neutral-color region in the white balance correction algorithm;
    the target brightness, the target saturation, and the filter kernel parameters used to generate a low-pass filtered image in the tone mapping algorithm;
    the contrast threshold in the contrast enhancement algorithm;
    the edge enhancement factor in the image edge enhancement algorithm; and
    the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
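For illustration only: a spatial-domain Gaussian parameter paired with a pixel-value-domain Gaussian parameter is characteristic of a bilateral filter, so a minimal 1-D sketch (an assumption — the claim does not name a specific noise reduction algorithm) shows how the two parameters trade off smoothing against edge preservation.

```python
import math

def bilateral_1d(signal, sigma_s, sigma_r, radius=2):
    """1-D bilateral filter: sigma_s weights spatial distance,
    sigma_r weights pixel-value difference."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((center - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Noisy step edge: small sigma_r keeps the jump from being blurred.
noisy_edge = [0.1, 0.12, 0.09, 0.9, 0.92, 0.88]
smoothed = bilateral_1d(noisy_edge, sigma_s=1.0, sigma_r=0.1)
print(smoothed)
```

Tuning `sigma_s` and `sigma_r` jointly with the downstream detection loss, as in claims 3-4, would make the denoising strength serve the detector rather than visual appearance.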
  7. The image detection method according to claim 5, wherein the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include:
    weight coefficients of a neural network used to generate the image processing model.
  8. The image detection method according to any one of claims 1-7, wherein the annotation information of the first sample image data is manually annotated; and
    the method further comprises:
    converting the first sample image data into a color image suitable for manual annotation.
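For illustration only: one common way raw sensor data becomes a color image an annotator can view is demosaicing plus scaling to 8-bit. The sketch below uses a nearest-neighbour demosaic of an RGGB Bayer mosaic; the function name, the RGGB assumption, and the 10-bit range are all assumptions, not the claimed conversion.

```python
def bayer_to_rgb(raw, black_level=0, white_level=1023):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic (list of rows)
    into an 8-bit RGB image — a stand-in for converting raw sample data
    into a color image suitable for manual annotation."""
    h, w = len(raw), len(raw[0])

    def to8(v):
        v = (v - black_level) / (white_level - black_level)
        return max(0, min(255, round(v * 255)))

    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y][x]                                # top-left: red
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2      # two greens
            b = raw[y + 1][x + 1]                        # bottom-right: blue
            px = (to8(r), to8(g), to8(b))
            for dy in (0, 1):
                for dx in (0, 1):
                    rgb[y + dy][x + dx] = px
    return rgb

raw = [[512, 256],
       [256, 128]]   # one RGGB quad of 10-bit values
print(bayer_to_rgb(raw)[0][0])  # (128, 64, 32)
```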
  9. The image detection method according to any one of claims 1-8, wherein the image detection model is used to perform at least one of the following detection tasks: labeling of detection boxes, recognition of a target object, prediction of confidence, and prediction of a target object's motion trajectory.
  10. An image detection apparatus, comprising:
    an acquisition module configured to acquire image data to be detected through a first camera device;
    a processing module configured to process the image data to be detected using an image processing algorithm to generate a processed image, wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of first sample image data collected by the first camera device with the detection result of an image detection model on the first sample image data, and adjusting based on the comparison result; and
    a detection module configured to input the processed image into the image detection model to obtain a detection result.
  11. The image detection apparatus according to claim 10, wherein the image detection model is obtained by performing neural network training on second sample image data collected by a second camera device.
  12. The image detection apparatus according to claim 10 or 11, wherein the parameters of the image processing algorithm are determined by a parameter adjustment module, the parameter adjustment module comprising:
    a comparison submodule configured to compare the detection result of the first sample image data with the annotation information of the first sample image data to obtain the comparison result;
    an adjustment submodule configured to iteratively adjust the parameters of the image processing algorithm based on the comparison result; and
    a saving submodule configured to save the parameters of the image processing algorithm when a preset condition is satisfied.
  13. The image detection apparatus according to claim 12, wherein the comparison result is an error, and the adjustment submodule is further configured to:
    construct a target loss function based on the error between the detection result of the first sample image data and the annotation information of the first sample image data, wherein the target loss function includes the parameters of the image processing algorithm to be updated; and
    iteratively update the parameters of the image processing algorithm based on the target loss function using a back-propagation algorithm and a gradient descent algorithm.
  14. The image detection apparatus according to any one of claims 10-13, wherein the image processing algorithm includes at least one of the following image processing procedures: dark current correction, lens shading correction, demosaicing, white balance correction, tone mapping, contrast enhancement, image edge enhancement, and image noise reduction.
  15. The image detection apparatus according to claim 14, wherein the parameters of the image processing algorithm include at least one of the following:
    the distance between each pixel of the image and the optical center of the camera device in the lens shading correction algorithm;
    the boundary coordinates in the image of the neutral-color region in the white balance correction algorithm;
    the target brightness, the target saturation, and the filter kernel parameters used to generate a low-pass filtered image in the tone mapping algorithm;
    the contrast threshold in the contrast enhancement algorithm;
    the edge enhancement factor in the image edge enhancement algorithm; and
    the spatial-domain Gaussian parameter and the pixel-value-domain Gaussian parameter in the image noise reduction algorithm.
  16. The image detection apparatus according to claim 14, wherein the image processing algorithm is executed by a trained image processing model, and the parameters of the image processing algorithm include:
    weight coefficients of a neural network used to generate the image processing model.
  17. The image detection apparatus according to any one of claims 10-16, wherein the annotation information of the first sample image data is manually annotated; and
    the apparatus further comprises:
    a conversion module configured to convert the first sample image data into a color image suitable for manual annotation.
  18. The image detection apparatus according to any one of claims 10-17, wherein the image detection model is used to perform at least one of the following detection tasks: labeling of detection boxes, recognition of a target object, prediction of confidence, and prediction of a target object's motion trajectory.
  19. An electronic device, comprising:
    a first camera device configured to collect image data to be detected;
    an image signal processor configured to process the image data to be detected using an image processing algorithm to generate a processed image; and
    an artificial intelligence processor configured to input the processed image into an image detection model to obtain a detection result,
    wherein the parameters of the image processing algorithm are obtained by comparing the annotation information of first sample image data collected by the first camera device with the detection result of the image detection model on the first sample image data, and adjusting based on the comparison result.
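For illustration only: the three components of claim 19 form a camera → image signal processor → AI processor chain. The minimal sketch below shows that data flow with toy classes; all class and method names, the fixed gain, and the brightness-threshold "detector" are assumptions for illustration.

```python
class Camera:
    """Stand-in for the first camera device."""
    def capture(self):
        return [0.2, 0.5, 0.8]       # stand-in for raw image data

class ImageSignalProcessor:
    """Stand-in ISP applying one tuned parameter (claims 3-4)."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, raw):
        return [min(1.0, p * self.gain) for p in raw]

class AIProcessor:
    """Stand-in AI processor running the 'image detection model'."""
    def detect(self, image):
        return [p > 0.5 for p in image]  # flag bright pixels

camera = Camera()
isp = ImageSignalProcessor(gain=1.2)   # parameter tuned offline
ai = AIProcessor()
result = ai.detect(isp.process(camera.capture()))
print(result)
```

The point of the claimed arrangement is that the ISP parameter (here `gain`) is chosen to maximize downstream detection quality, not image aesthetics.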
  20. An image detection apparatus, comprising:
    one or more processors and a memory;
    the memory being coupled to the one or more processors and configured to store one or more programs; and
    the one or more processors being configured to run the one or more programs to implement the method according to any one of claims 1-9.
  21. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by at least one processor, implements the method according to any one of claims 1-9.
  22. A computer program product, wherein the computer program product, when executed by at least one processor, implements the method according to any one of claims 1-9.
PCT/CN2021/078478 2021-03-01 2021-03-01 Image detection method, apparatus, and electronic device WO2022183321A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/078478 WO2022183321A1 (en) 2021-03-01 2021-03-01 Image detection method, apparatus, and electronic device
CN202180093086.5A CN116888621A (en) 2021-03-01 2021-03-01 Image detection method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/078478 WO2022183321A1 (en) 2021-03-01 2021-03-01 Image detection method, apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2022183321A1 true WO2022183321A1 (en) 2022-09-09

Family

ID=83153711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078478 WO2022183321A1 (en) 2021-03-01 2021-03-01 Image detection method, apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN116888621A (en)
WO (1) WO2022183321A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150178544A1 (en) * 2012-06-15 2015-06-25 Seref Sagiroglu System for estimating gender from fingerprints
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN109961443A (en) * 2019-03-25 2019-07-02 北京理工大学 Liver neoplasm dividing method and device based on the guidance of more phase CT images
CN110458095A (en) * 2019-08-09 2019-11-15 厦门瑞为信息技术有限公司 A kind of recognition methods, control method, device and the electronic equipment of effective gesture
CN112101328A (en) * 2020-11-19 2020-12-18 四川新网银行股份有限公司 Method for identifying and processing label noise in deep learning


Also Published As

Publication number Publication date
CN116888621A (en) 2023-10-13


Legal Events

121  Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 21928412; country: EP; kind code: A1)
WWE  WIPO information: entry into national phase (ref document number: 202180093086.5; country: CN)
NENP Non-entry into the national phase (ref country code: DE)
122  Ep: PCT application non-entry in European phase (ref document number: 21928412; country: EP; kind code: A1)