CN115115546A - Image processing method, system, electronic equipment and readable storage medium - Google Patents

Image processing method, system, electronic equipment and readable storage medium

Info

Publication number
CN115115546A
Authority
CN
China
Prior art keywords
image
target image
license plate
target
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210752477.XA
Other languages
Chinese (zh)
Inventor
谢磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210752477.XA priority Critical patent/CN115115546A/en
Publication of CN115115546A publication Critical patent/CN115115546A/en
Pending legal-status Critical Current

Classifications

    All within G — Physics; G06 — Computing; calculating or counting; G06T — Image data processing or generation, in general:
    • G06T5/00 Image enhancement or restoration → G06T5/70 Denoising; Smoothing
    • G06T3/00 Geometric image transformations in the plane of the image → G06T3/40 Scaling of whole images or parts thereof → G06T3/4023 Scaling based on decimating pixels or lines of pixels, or on inserting pixels or lines of pixels
    • G06T7/00 Image analysis → G06T7/10 Segmentation; Edge detection → G06T7/13 Edge detection
    • G06T7/00 Image analysis → G06T7/10 Segmentation; Edge detection → G06T7/136 Segmentation involving thresholding
    • G06T7/00 Image analysis → G06T7/60 Analysis of geometric attributes → G06T7/66 Analysis of image moments or centre of gravity
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/20 Special algorithmic details → G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20 Special algorithmic details → G06T2207/20212 Image combination → G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing → G06T2207/30196 Human being; Person → G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method, an image processing system, an electronic device, and a readable storage medium. A target image is first acquired; a pixel point is then selected from the target image as the image center point, all pixel points located within a preset area around the image center point are selected from the target image, and the selected pixel points are combined with the image center point to generate a filtering template. Finally, the average pixel value of the filtering template is calculated, and the pixel value at the image center point is replaced with the calculated average, thereby filtering the target image. The target image comprises a face image and/or a license plate image. Because the method adopts a mean filtering algorithm for image blurring, it is both simple to compute and fast, so a large amount of computing resources can be saved when processing massive data. This solves the prior-art problems of heavy computation and long running time when blurring face images and/or license plate images.

Description

Image processing method, system, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to an image processing method, system, electronic device, and readable storage medium.
Background
Autonomous vehicles rely on large-scale data collection, including personal and technical information, and data-compliance issues must be considered when collecting and uploading such data. Data desensitization is a technique for removing private information from data: when vehicle-side data is uploaded to the cloud, face and license plate information in each image is detected and recognized, and that information is blurred, thereby protecting personal information. In the prior art, image blurring first reduces the original image and then enlarges both the filter template and the reduced image through linear interpolation, obtaining a blurred image of the same size as the original. This method, however, involves heavy computation and long running time, and for massive Internet-of-Vehicles data the process demands a large amount of computing resources. Meanwhile, most current face and license plate detection systems impose strict constraints on the acquisition scene, such as requirements on lighting, camera position, and environment; in practice, many application scenarios cannot meet such harsh restrictions, so these systems cannot be widely applied.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present application provides an image processing method, system, electronic device and readable storage medium to solve the above technical problems.
The application provides an image processing method, which comprises the following steps:
acquiring a target image shot in advance or in real time;
selecting a pixel point from the target image as an image center point, and selecting from the target image all pixel points located within a preset area around the image center point;
combining the selected pixel points with the image center points to generate a filtering template;
and calculating the average pixel value of the filtering template, and replacing the pixel value at the image center point with the calculated average pixel value, so as to perform filtering processing on the target image.
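As a sketch of the claimed steps — assuming a single-channel grayscale image and edge-replication padding at the borders, neither of which the application specifies — the mean filtering could look like:

```python
import numpy as np

def mean_filter(image: np.ndarray, n: int = 1) -> np.ndarray:
    """Mean-filter `image` with a (2n+1) x (2n+1) template.

    Hypothetical helper: edge pixels use edge-replication padding,
    an assumption on our part (the application does not specify
    border handling).
    """
    k = 2 * n + 1
    padded = np.pad(image.astype(np.float64), n, mode="edge")
    out = np.empty(image.shape, dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            # The filtering template: the center pixel plus its neighbors.
            template = padded[y:y + k, x:x + k]
            # Replace the center pixel with the template's average value.
            out[y, x] = template.mean()
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.float64)
print(mean_filter(img, n=1)[1, 1])  # center pixel -> mean of all 9 = 50.0
```

Applying the filter only inside the detected face or plate regions, as the later embodiments describe, would blur just those regions.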
In an embodiment of the present application, after obtaining a target image photographed in advance or in real time, the method further includes:
acquiring multiple preset groups of scaling factors, and scaling the target image proportionally with the multiple groups of scaling factors to form an image pyramid;
scanning the image pyramid in the horizontal and vertical directions with a step length scaled by the same factors, to obtain the images at different layers of the image pyramid;
marking the images at different layers with a deep convolutional neural network, acquiring the potential face regions in each image, and recording the upper-left-corner coordinates and the size of each potential face region as a response point;
and reversely mapping all response points onto the target image, fusing overlapping potential face regions, and performing face detection on the target image according to the fusion result.
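The pyramid-construction step above might be sketched as follows; the scaling factors and the nearest-neighbour decimation are illustrative assumptions, and the network inference, sliding-window scan, and region fusion are omitted:

```python
import numpy as np

def build_pyramid(image: np.ndarray, factors=(1.0, 0.5, 0.25)) -> list:
    """Scale a grayscale `image` by several preset factors to form
    an image pyramid.

    Minimal sketch: nearest-neighbour decimation stands in for a real
    resampler, and the factor values are our own illustrative choice.
    """
    levels = []
    h, w = image.shape
    for f in factors:
        # Indices of the source pixels kept at this pyramid layer.
        ys = (np.arange(int(h * f)) / f).astype(int)
        xs = (np.arange(int(w * f)) / f).astype(int)
        levels.append(image[np.ix_(ys, xs)])
    return levels

pyramid = build_pyramid(np.zeros((64, 64)))
print([lvl.shape for lvl in pyramid])  # [(64, 64), (32, 32), (16, 16)]
```

A detector with a fixed input window, scanned over every layer, then finds faces at different scales; response points found on a layer scaled by factor f map back to the original image by dividing their coordinates by f.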
In an embodiment of the application, after acquiring a target image shot in advance or in real time, the method further includes:
performing graying processing on the target image to obtain a corresponding grayscale image;
applying Gaussian blur to the grayscale image to filter out noise, obtaining a denoised grayscale image;
calculating the first-order derivative of the denoised grayscale image in the horizontal direction, and finding the vertical edges of the denoised grayscale image according to the derivative result;
acquiring a binarization threshold for the denoised grayscale image with an adaptive threshold algorithm, and generating a binary image;
applying a closing operation to the binary image, removing the blank space between every two vertical edge lines, and connecting the regions of all vertical edges to obtain license plate candidate regions;
and screening the license plate candidate regions based on the aspect ratio of the contour's circumscribed rectangle and the area of each candidate region, so as to perform license plate detection on the target image.
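The vertical-edge step can be illustrated with a hand-rolled first-order horizontal derivative (the 3 × 3 Sobel kernel) on a synthetic image; this is a sketch, not the application's implementation:

```python
import numpy as np

# Vertical edges respond strongly to a first-order horizontal derivative.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)

def horizontal_derivative(gray: np.ndarray) -> np.ndarray:
    """First-order horizontal derivative magnitude (valid region only)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(gray[y:y + 3, x:x + 3] * SOBEL_X)
    return np.abs(out)

# Synthetic grayscale image: dark left half, bright right half,
# i.e. a single vertical edge down the middle.
gray = np.zeros((8, 8))
gray[:, 4:] = 255.0
edges = horizontal_derivative(gray)
print(edges.max())  # 1020.0 -- strong response at the boundary columns
```

Thresholding this response, closing the gaps, and screening the connected regions by shape yields the plate candidates described above.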
In an embodiment of the present application, the process of acquiring a target image captured in advance or in real time includes:
identifying an image capture device preinstalled on a target vehicle, and acquiring a target image captured by the image capture device in advance or in real time;
wherein the target vehicle comprises at least one of: a fully automated intelligent driving vehicle, a semi-automated intelligent driving vehicle, and an ordinary motor vehicle;
and the image capture device is installed in advance at the front end, the rear end, and/or the side of the target vehicle;
the target image comprises a face image and/or a license plate image.
In an embodiment of the present application, the method further includes: uploading the filtered target image to a preset storage area, wherein the preset storage area comprises: and (4) a cloud server.
The present application further provides an image processing system, comprising:
the image acquisition module is used for acquiring a target image shot in advance or in real time;
the pixel point processing module is used for selecting a pixel point from the target image as an image center point, and selecting from the target image all pixel points located within a preset area around the image center point;
the filtering module is used for combining the selected pixel points with the image center point to generate a filtering template, calculating the average pixel value of the filtering template, and replacing the pixel value at the image center point with the calculated average pixel value to perform filtering processing on the target image.
In an embodiment of the application, the system further includes a face detection module, where the face detection module is configured to perform face detection on the target image after the target image is acquired; the process of the face detection module for carrying out face detection on the target image comprises the following steps:
acquiring multiple preset groups of scaling factors, and scaling the target image proportionally with the multiple groups of scaling factors to form an image pyramid;
scanning the image pyramid in the horizontal and vertical directions with a step length scaled by the same factors, to obtain the images at different layers of the image pyramid;
marking the images at different layers with a deep convolutional neural network, acquiring the potential face regions in each image, and recording the upper-left-corner coordinates and the size of each potential face region as a response point;
and reversely mapping all response points onto the target image, fusing overlapping potential face regions, and performing face detection on the target image according to the fusion result.
In an embodiment of the present application, the system further includes a license plate detection module, where the license plate detection module is configured to perform license plate detection on the target image after acquiring the target image; the license plate detection module performs license plate detection on the target image, and the process comprises the following steps:
performing graying processing on the target image to obtain a corresponding grayscale image;
applying Gaussian blur to the grayscale image to filter out noise, obtaining a denoised grayscale image;
calculating the first-order derivative of the denoised grayscale image in the horizontal direction, and finding the vertical edges of the denoised grayscale image according to the derivative result;
acquiring a binarization threshold for the denoised grayscale image with an adaptive threshold algorithm, and generating a binary image;
applying a closing operation to the binary image, removing the blank space between every two vertical edge lines, and connecting the regions of all vertical edges to obtain license plate candidate regions;
and screening the license plate candidate regions based on the aspect ratio of the contour's circumscribed rectangle and the area of each candidate region, so as to perform license plate detection on the target image.
The present application further provides an electronic device, the electronic device including:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement the image processing method as claimed in any one of the above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the image processing method as defined in any one of the above.
As described above, the present application provides an image processing method, system, electronic device, and readable storage medium, which have the following advantages:
firstly, a target image captured in advance or in real time is acquired; then a pixel point is selected from the target image as the image center point, all pixel points located within a preset area around the image center point are selected from the target image, and the selected pixel points are combined with the image center point to generate a filtering template; finally, the average pixel value of the filtering template is calculated, and the pixel value at the image center point is replaced with the calculated average, so as to filter the target image. The target image comprises a face image and/or a license plate image. In other words, a certain target pixel point in the target image is taken as the image center point, the n pixel points around it are selected from the target image, and the selected pixel points together with the image center point form a filtering template; the pixel values in the filtering template are then averaged, and the calculated average replaces the center pixel value, thereby filtering the image. Because the method adopts the mean filtering algorithm for image blurring, it is both simple to compute and fast, so a large amount of computing resources can be saved when processing massive data. This solves the prior-art problems of heavy computation and long running time when blurring face images and/or license plate images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram of an exemplary system architecture to which one or more embodiments of the present application may be applied;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a face detection process according to an embodiment of the present application;
fig. 4 is a schematic flowchart of license plate detection according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of image processing according to another embodiment of the present application;
fig. 6 is a schematic hardware structure diagram of an image processing system according to an embodiment of the present application;
fig. 7 is a hardware configuration diagram of an electronic device suitable for implementing one or more embodiments of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the disclosure herein, wherein the embodiments of the present application will be described in detail with reference to the accompanying drawings and preferred embodiments. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It should be understood that the preferred embodiments are for purposes of illustration only and are not intended to limit the scope of the present disclosure.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present application, and the drawings only show the components related to the present application and are not drawn according to the number, shape and size of the components in actual implementation, and the type, number and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of the embodiments of the present application, however, it will be apparent to one skilled in the art that the embodiments of the present application may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring the embodiments of the present application.
Sobel filters, also known as Sobel operators, compute first-order, second-order, third-order, or mixed image derivatives. In general, the Sobel filter is invoked to compute the first-order image derivative in the horizontal or vertical direction with the parameters (xorder = 1, yorder = 0, kernel size 3) or (xorder = 0, yorder = 1, kernel size 3), respectively.
Otsu's method, also known as the maximum between-class variance method, divides the original image into foreground and background according to a threshold. Foreground: the number of points, the cumulative gray-level moment, and the mean gray level of the foreground under the current threshold are denoted n1, csum, and m1. Background: the number of points, the cumulative gray-level moment, and the mean gray level of the background under the current threshold are denoted n2, sum − csum, and m2. At the optimal threshold, the background differs most from the foreground, and in Otsu's algorithm this measure of difference is the between-class variance, which is maximized.
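A minimal sketch of Otsu's threshold search, using the n1/csum/m1 and n2/m2 quantities named above; the histogram-based formulation is our own, not a claim of the application:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the threshold maximizing the between-class variance.

    Sketch implementation; not OpenCV's. Pixels <= t count as
    foreground, pixels > t as background.
    """
    hist = np.bincount(gray.ravel().astype(np.int64),
                       minlength=256).astype(np.float64)
    total = hist.sum()
    total_moment = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    n1, csum = 0.0, 0.0
    for t in range(256):
        n1 += hist[t]                 # foreground point count
        if n1 == 0:
            continue
        n2 = total - n1               # background point count
        if n2 == 0:
            break
        csum += t * hist[t]           # foreground gray-level moment
        m1 = csum / n1                # foreground mean gray level
        m2 = (total_moment - csum) / n2   # background mean gray level
        var_between = n1 * n2 * (m1 - m2) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal test image: half dark (20), half bright (200).
img = np.array([20] * 50 + [200] * 50)
print(otsu_threshold(img))  # 20 -- first threshold separating the modes
```

The returned threshold binarizes the denoised grayscale image into the binary image used in the plate-detection pipeline.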
Fig. 1 shows a schematic diagram of an exemplary system architecture to which technical solutions in one or more embodiments of the present application may be applied. As shown in fig. 1, system architecture 100 may include a terminal device 110, a network 120, and a server 130. The terminal device 110 may include various electronic devices such as a smart phone, a tablet computer, a notebook computer, and a desktop computer. The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. Network 120 may be a communication medium of various connection types capable of providing a communication link between terminal device 110 and server 130, such as a wired communication link or a wireless communication link.
The system architecture in the embodiments of the present application may have any number of terminal devices, networks, and servers, according to implementation needs. For example, the server 130 may be a server group composed of a plurality of server devices. In addition, the technical solution provided in the embodiment of the present application may be applied to the terminal device 110, or may be applied to the server 130, or may be implemented by both the terminal device 110 and the server 130, which is not particularly limited in this application.
In an embodiment of the present application, the terminal device 110 or the server 130 may acquire a target image captured in advance or in real time, select a pixel point from the target image as the image center point, select from the target image all pixel points located within a preset region around the image center point, and combine the selected pixel points with the image center point to generate a filtering template; it may then calculate the average pixel value of the filtering template and replace the pixel value at the image center point with the calculated average, thereby filtering the target image. That is, the image processing method executed by the terminal device 110 or the server 130 takes a certain target pixel point in the target image as the image center point, selects the n pixel points around it from the target image, combines the selected pixel points with the image center point into a filtering template, averages the pixel values in the template, and replaces the center pixel value with the calculated average, thereby filtering the image. The image processing method executed by the terminal device 110 or the server 130 is not only simple to compute but also fast, and can save a large amount of computing resources when processing massive data.
The above section describes the content of an exemplary system architecture to which the technical solution of the present application is applied, and the following continues to describe the image processing method of the present application.
Fig. 2 shows a flowchart of an image processing method according to an embodiment of the present application. Specifically, in an exemplary embodiment, as shown in fig. 2, the present embodiment provides an image processing method including the steps of:
s210, acquiring a target image shot in advance or in real time. As an example, the process of acquiring a target image photographed in advance or in real time includes: acquiring an image shooting device preset on a target vehicle, and acquiring a target image shot by the image shooting device in advance or in real time; the target vehicle in the present embodiment includes, but is not limited to: full-automatic intelligent driving vehicles, semi-automatic intelligent driving vehicles and ordinary motor driving vehicles; for example, a fully automated smart driving vehicle may be used as the target vehicle in the present embodiment. The image shooting device in the embodiment is arranged at the front end, the rear end and/or the side of the target vehicle in advance; for example, a drive recorder provided at the front end of the target vehicle for recording the travel data of the target vehicle may be used as the image capturing device in the present embodiment. In addition, the target image in the embodiment includes a face image and/or a license plate image.
S220, selecting a pixel point from the target image as the image center point, and selecting from the target image all pixel points located within a preset area around the image center point. As an example, the preset area in this embodiment may be set according to the actual situation; for instance, it may be the area formed by the n pixel points around the image center point, where n is a positive integer.
S230, combining the selected pixel points with the image center point to generate a filtering template;
And S240, calculating the average pixel value of the filtering template, and replacing the pixel value at the image center point with the calculated average pixel value, so as to filter the target image.
In this embodiment, therefore, a certain target pixel point in the target image is taken as the image center point, the n pixel points around it are selected from the target image and combined with the image center point to form a filtering template, the pixel values in the filtering template are averaged, and the calculated average replaces the center pixel value, thereby filtering the image. Adopting the mean filtering algorithm for image blurring is both simple to compute and fast, so a large amount of computing resources can be saved when processing massive data; this solves the prior-art problems of heavy computation and long running time when blurring face images and/or license plate images.
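As an aside on why mean filtering is cheap: the same (2n+1) × (2n+1) average can be computed in constant time per pixel with a summed-area (integral) image. This vectorized sketch assumes edge-replication padding, our own choice, which the application does not specify:

```python
import numpy as np

def mean_filter_fast(image: np.ndarray, n: int = 1) -> np.ndarray:
    """Box (mean) filter via a summed-area table: O(1) work per pixel.

    Hypothetical sketch of why mean filtering is fast; same padding
    assumption (edge replication) as a naive version would need.
    """
    k = 2 * n + 1
    p = np.pad(image.astype(np.float64), n, mode="edge")
    # Integral image with a zero row/column prepended, so that the sum
    # over p[y:y+k, x:x+k] is s[y+k, x+k] - s[y, x+k] - s[y+k, x] + s[y, x].
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)
    h, w = image.shape
    return (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)

img = np.arange(25, dtype=np.float64).reshape(5, 5)
print(mean_filter_fast(img)[2, 2])  # mean of the central 3x3 block = 12.0
```

The integral image is built once in two passes, after which every window average is four lookups, independent of n; this matches the document's claim that mean blurring scales well to massive data.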
According to the above-mentioned description, in an exemplary embodiment, after acquiring the target image photographed in advance or in real time, the method may further include: acquiring multiple preset groups of scaling factors, and scaling the target image in an equal proportion by using the multiple groups of scaling factors to form an image pyramid; scanning the image pyramid in the horizontal direction and the vertical direction by step length of equal scaling to obtain images of different layers in the image pyramid; marking images of different layers by using a deep convolutional neural network, acquiring a face potential area in each image, and recording the coordinates and the size of the upper left corner of the face potential area as a response point; and reversely mapping all the response points to the target image, fusing the overlapped potential human face areas, and performing human face detection on the target image according to a fusion result. Specifically, as shown in fig. 3, a plurality of different sets of scaling factors are first set, and an original input image is scaled in an equal ratio to generate an image pyramid. Based on sliding window technique, scanning in horizontal and vertical direction with step size of equal scaling, and scanning pyramid images of different layers to detect faces in different positions. And processing the image of each layer of the image pyramid by using the deep convolutional neural network obtained by training, marking a face potential area on each image, and recording the coordinates and the size of the upper left corner of each face area as a response point. After all the images in the image pyramid are processed, mapping all the response points to the original images in a reverse mode; and finally, fusing the overlapped human face potential regions to obtain a final human face detection result. 
Therefore, the present embodiment can perform face detection on the target image based on the deep convolutional neural network, so as to determine whether the target image includes a face. The method is equivalent to directly using the deep convolutional neural network technology to input the acquired image information, then performing equal-ratio scaling to generate an image pyramid, sequentially processing the images in the image pyramid, reversely mapping all the response point regions into the original input image, and finally fusing the face potential regions with the overlapped regions to obtain the final face detection result.
According to the above description, in an exemplary embodiment, after acquiring the target image captured in advance or in real time, the method may further include: performing graying processing on the target image to obtain a corresponding grayscale image; applying Gaussian blur to the grayscale image to filter out noise, obtaining a denoised grayscale image; calculating the first-order derivative of the denoised grayscale image in the horizontal direction, and finding its vertical edges according to the derivative result; acquiring a binarization threshold for the denoised grayscale image with an adaptive threshold algorithm, and generating a binary image; applying a closing operation to the binary image, removing the blank space between every two vertical edge lines, and connecting the regions of all vertical edges to obtain license plate candidate regions; and screening the license plate candidate regions based on the aspect ratio of the contour's circumscribed rectangle and the area of each candidate region, so as to perform license plate detection on the target image. Specifically, the license plate detection process is shown in fig. 4. In fig. 4, the color image is first converted into a grayscale image, and a 5 × 5 template is used to apply Gaussian blur to the grayscale image, filtering out noise from the camera or the environment and yielding a denoised grayscale image. The first-order derivative of the denoised grayscale image in the horizontal direction is calculated with a Sobel filter, and the vertical edges are found from the derivative result. A binarization threshold for the denoised grayscale image is obtained with the Otsu adaptive threshold algorithm, yielding a binary image.
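The 5 × 5 Gaussian template mentioned above can be generated as follows; the application states only the template size, so sigma = 1.0 is an assumption on our part:

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Build a normalized size x size Gaussian template.

    The 5x5 size matches the text; sigma is an illustrative assumption.
    """
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()  # normalize so the template sums to 1

k = gaussian_kernel()
print(k.shape)  # (5, 5); center weight k[2, 2] is the largest entry
```

Convolving the grayscale image with this template performs the denoising step; the kernel sums to 1, so the overall image brightness is preserved.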
A closing operation is then applied to the binary image, removing the blank space between every two vertical edge lines and connecting all regions containing a large number of edges, to obtain the license plate candidate regions. The candidate regions are further screened by the aspect ratio and area of the contour's circumscribed rectangle, and an image block containing only the license plate is cropped out, thereby realizing license plate detection for the target vehicle.
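The aspect-ratio and area screening might be sketched as below; the concrete thresholds are illustrative assumptions (the application gives none), loosely motivated by the roughly 440 × 140 mm proportions of a mainland-China plate (ratio ≈ 3.1):

```python
def plausible_plate(w: int, h: int,
                    min_area: int = 1000,
                    ratio_range: tuple = (2.0, 6.0)) -> bool:
    """Screen a candidate region by its circumscribed-rectangle
    aspect ratio and area.

    Hypothetical helper; min_area and ratio_range are illustrative
    assumptions, not values from the application.
    """
    if h == 0:
        return False
    ratio = w / h
    return w * h >= min_area and ratio_range[0] <= ratio <= ratio_range[1]

print(plausible_plate(220, 70))   # ratio ~3.1, ample area -> True
print(plausible_plate(100, 100))  # square region, ratio 1.0 -> False
```

Candidates passing the screen are cropped and handed to the blurring step; the rest are discarded as non-plate edge clusters.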
In an exemplary embodiment, the filtered target image may be uploaded to a preset storage area, where the preset storage area comprises a cloud server. Once the filtered image has been uploaded to the cloud server, it can be downloaded directly from the cloud whenever it is needed later, so the terminal does not have to send the image separately each time; this reduces the image transmission cost and also extends the image's storage lifetime.
As shown in fig. 5, in an exemplary embodiment, the present embodiment further provides an image processing method, including the steps of:
acquiring a target image photographed in advance or in real time, wherein the target image includes a face image and/or a license plate image;
selecting a pixel point from the target image as a target pixel point, and forming a filtering template from the target pixel point, as the center, together with the surrounding pixel points within a range of n pixels;
acquiring pixel values corresponding to the region where the filtering template is located, and averaging the pixel values to obtain a pixel average value;
and replacing the pixel value corresponding to the target pixel point in the original filtering template with the calculated pixel average value; as n grows larger, the image becomes increasingly blurred. Finally, the blurred, filtered image is uploaded to the cloud.
Specifically, assuming that the input image is P (x, y) and the output image of the filtering process is G (x, y), the relationship between the two can be expressed by the following equation:
G(x, y) = u(x, y)
where u(x, y) is the average of the pixel values of P in the neighborhood of size (2n+1) × (2n+1) centered at the pixel point (x, y).
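The replacement rule above, in which each target pixel takes the mean of its (2n+1) × (2n+1) neighborhood, can be sketched in NumPy as follows; edge-replication padding at the borders is an illustrative assumption:

```python
import numpy as np

def mean_blur(P, n):
    """Mean filtering: each output pixel is the average of its (2n+1)x(2n+1) neighborhood."""
    P = P.astype(np.float64)
    padded = np.pad(P, n, mode="edge")   # replicate borders so every pixel has a full window
    G = np.zeros_like(P)
    h, w = P.shape
    for x in range(h):
        for y in range(w):
            # u(x, y): mean over the (2n+1) x (2n+1) window centered at (x, y)
            G[x, y] = padded[x:x + 2 * n + 1, y:y + 2 * n + 1].mean()
    return G
```

A larger n averages over more pixels, so a single bright detail is spread more thinly and the image becomes more blurred, matching the observation above.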
As can be seen from the above description, in this embodiment, collected privacy-sensitive images such as face images and license plate images can be blurred by using the mean filtering method; that is, mean filtering serves as the means of image blurring in this embodiment. The advantages of mean filtering are that its principle is simple, its computation amount is small, and the time required is short. In this embodiment, a convolutional neural network (CNN) is used to detect the face and license plate information collected by the vehicle-mounted camera, the detected data are then blurred by efficient mean filtering without infringing personal privacy, and the data are uploaded to the vehicle-mounted cloud.
In summary, the present application provides an image processing method: first, a target image photographed in advance or in real time is acquired; then a pixel point is selected from the target image as an image center point, all pixel points located within a preset area around the image center point are selected from the target image, and the selected pixel points are combined with the image center point to generate a filtering template; finally, the average pixel value of the filtering template is calculated and used to replace the pixel value at the image center point, thereby filtering the target image. The target image includes a face image and/or a license plate image. In other words, a certain target pixel point in the target image is taken as the image center point, the n pixel points around the image center point are selected from the target image and combined with the image center point to form a filtering template, the pixel values in the filtering template are averaged, and the calculated average pixel value replaces the pixel value at the center of the filtering template, thereby filtering the image. The method adopts the mean filtering algorithm as the image blurring method; the algorithm is simple to implement and fast, so a large amount of computing resources can be saved when processing massive data. This solves the problems of large computation amount and long running time encountered in the prior art when blurring face images and/or license plate images.
In a complex traffic environment, the collected data are affected by factors such as motion blur, occlusion by ornaments, and environmental interference. Based on the automatic feature extraction of the deep convolutional network, the method is highly robust to scenes and interference factors, can detect face and license plate information in different environments, and uploads the detected data to the cloud after mean-filter blurring; the processing is relatively simple, saves a large amount of computing resources, and also safeguards the privacy of others. In addition, the method uses a deep convolutional neural network to solve face and license plate detection: license plate images in different scenes can be collected first, and the strong automatic feature-extraction capability of the deep neural network is then used to extract the global and local invariant features of the license plate and face data in different scenes. Compared with traditional methods based on hand-crafted visual features, the method can accurately recognize face and license plate information in different scenes. The method uses a convolutional neural network (CNN) to detect the face and license plate information collected by the vehicle-mounted camera, can perform efficient mean-filter blurring on the detected data without infringing personal privacy, and can upload the blurred image to the vehicle-mounted cloud.
As shown in fig. 6, the present application further provides an image processing system, including:
and the image acquisition module 610 is used for acquiring a target image which is shot in advance or in real time. As an example, the process of acquiring a target image photographed in advance or in real time includes: acquiring an image shooting device preset on a target vehicle, and acquiring a target image shot by the image shooting device in advance or in real time; the target vehicle in the present embodiment includes, but is not limited to: full-automatic intelligent driving vehicles, semi-automatic intelligent driving vehicles and ordinary motor driving vehicles; for example, a fully automated smart driving vehicle may be used as the target vehicle in the present embodiment. The image shooting device in the embodiment is arranged at the front end, the rear end and/or the side of the target vehicle in advance; for example, a drive recorder provided at the front end of the target vehicle for recording the travel data of the target vehicle may be used as the image capturing device in the present embodiment. In addition, the target image in the embodiment includes a face image and/or a license plate image.
A pixel point processing module 620, configured to select a pixel point from the target image as an image center point, and to select, from the target image, all pixel points located within a preset area around the image center point. As an example, the preset area in this embodiment may be set according to the actual situation; for example, it may be the area of the n pixel points around the image center point, where n is a positive integer.
A filtering module 630, configured to combine the selected pixel point with the image center point to generate a filtering template; and calculating an average pixel value of the filtering template, and replacing the pixel value of the filtering template with the calculated average pixel value to perform filtering processing on the target image.
Therefore, in the embodiment, a certain target pixel point in the target image is taken as an image center point, then n pixel points around the image center point are selected from the target image, and the selected pixel points and the image center point are combined to form a filtering template, then an average value is taken for pixel values in the filtering template, and the calculated average pixel value is used for replacing the pixel value of the filtering template, so that the image is filtered. In the embodiment, the mean filtering algorithm is adopted as the image fuzzy processing method, so that the operation is simple, the speed is high, and a large amount of computing resources can be saved when massive data are processed; therefore, the problems of large calculated amount and long operation time when the fuzzy processing is carried out on the face image and/or the license plate image in the prior art are solved.
In an exemplary embodiment, the system further includes a face detection module configured to perform face detection on the target image after the target image is acquired. The face detection process performed by the module includes: acquiring multiple preset groups of scaling factors, and scaling the target image proportionally with these scaling factors to form an image pyramid; scanning the image pyramid in the horizontal and vertical directions with a proportionally scaled step size to obtain the images at different layers of the pyramid; marking the images at different layers with a deep convolutional neural network, acquiring the potential face regions in each image, and recording the upper-left corner coordinates and the size of each potential face region as a response point; and mapping all the response points back onto the target image, fusing the overlapping potential face regions, and performing face detection on the target image according to the fusion result. Specifically, as shown in fig. 3, multiple groups of different scaling factors are first set, and the original input image is scaled proportionally to generate an image pyramid. Based on the sliding-window technique, the pyramid images at different layers are scanned in the horizontal and vertical directions with a proportionally scaled step size, so that faces at different positions can be detected. The trained deep convolutional neural network processes the image at each layer of the pyramid, marks the potential face regions on each image, and records the upper-left corner coordinates and the size of each face region as a response point.
After all the images in the image pyramid have been processed, all the response points are mapped back onto the original image; finally, the overlapping potential face regions are fused to obtain the final face detection result. Therefore, this embodiment can perform face detection on the target image based on the deep convolutional neural network, so as to determine whether the target image contains a face. This is equivalent to directly feeding the collected image information into the deep convolutional neural network, scaling it proportionally to generate an image pyramid, processing the images in the pyramid in turn, mapping all the response-point regions back into the original input image, and finally fusing the overlapping potential face regions to obtain the final face detection result.
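The pyramid scan and region fusion described above can be sketched as follows. The `score_fn` stub stands in for the trained deep convolutional neural network, and the window size, stride, scales, and merge threshold are all illustrative assumptions:

```python
import numpy as np

def scan_pyramid(image, score_fn, window=24, stride=4, scales=(1.0, 0.75, 0.5)):
    """Slide a fixed window over each pyramid layer; map hits back to original coordinates.

    score_fn(patch) -> float stands in for the trained deep CNN; any patch
    scoring above 0.5 is recorded as a potential face region (response point).
    """
    hits = []
    for s in scales:
        h, w = int(image.shape[0] * s), int(image.shape[1] * s)
        # nearest-neighbour resize keeps the sketch dependency-free
        rows = (np.arange(h) / s).astype(int)
        cols = (np.arange(w) / s).astype(int)
        layer = image[rows][:, cols]
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                if score_fn(layer[y:y + window, x:x + window]) > 0.5:
                    # response point: top-left corner and size, mapped back to the original image
                    hits.append((int(x / s), int(y / s), int(window / s)))
    return fuse(hits)

def fuse(hits, min_overlap=0.3):
    """Greedily merge overlapping potential regions into final detections."""
    final = []
    for (x, y, sz) in sorted(hits):
        for i, (fx, fy, fsz) in enumerate(final):
            ix = max(0, min(x + sz, fx + fsz) - max(x, fx))
            iy = max(0, min(y + sz, fy + fsz) - max(y, fy))
            if ix * iy > min_overlap * sz * sz:   # overlaps an existing region: merge
                final[i] = ((x + fx) // 2, (y + fy) // 2, max(sz, fsz))
                break
        else:
            final.append((x, y, sz))
    return final
```

In a real deployment `score_fn` would run the CNN on each patch; the greedy merge here is a simple substitute for the fusion of overlapping potential face regions.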
In an exemplary embodiment, the system further includes a license plate detection module configured to perform license plate detection on the target image after the target image is acquired. The license plate detection process performed by the module includes: performing graying processing on the target image to obtain a corresponding grayscale image; applying Gaussian blur to the grayscale image to filter out noise, thereby obtaining a denoised grayscale image; calculating the first-order derivative of the denoised grayscale image in the horizontal direction, and locating the vertical edges of the denoised grayscale image according to the derivative result; obtaining a binarization threshold for the denoised grayscale image by using an adaptive threshold algorithm, and generating a binary picture; applying a closing operation to the binary picture to remove the blank spaces between adjacent vertical edge lines and connect all vertical-edge regions, thereby obtaining license plate candidate regions; and screening the license plate candidate regions based on the aspect ratio of the contour's circumscribed rectangle and the region area, so as to perform license plate detection on the target image. Specifically, the license plate detection process performed by the module is shown in fig. 4. In fig. 4, the color image is first converted into a grayscale image, and a 5 × 5 template is used to apply Gaussian blur to the grayscale image, filtering out noise from the camera or the environment and yielding a denoised grayscale image. The first-order horizontal derivative of the denoised grayscale image is then calculated with a Sobel filter, and the vertical edges of the image are located from the derivative result.
A binarization threshold for the denoised grayscale image is obtained with the Otsu adaptive threshold algorithm, yielding a binary picture. A closing operation is applied to the binary picture to remove the blank spaces between adjacent vertical edge lines and connect the regions containing a large number of edges, yielding license plate candidate regions. The candidate regions are then screened by the aspect ratio of the contour's circumscribed rectangle and the region area, and an image block containing only the license plate is cropped out, thereby realizing license plate detection on the target vehicle.
In an exemplary embodiment, the filtered target image may be uploaded to a preset storage area, where the preset storage area includes a cloud server. After the filtered image is uploaded to the cloud server (the cloud), the image can be downloaded directly from the cloud whenever the filtered target image is needed later, so the terminal does not need to transmit the image separately each time; this reduces the image transmission cost and also extends the storage time of the image.
In an exemplary embodiment, the present embodiment also provides an image processing system for performing the steps of:
acquiring a target image photographed in advance or in real time, wherein the target image includes a face image and/or a license plate image;
selecting a pixel point from the target image as a target pixel point, and forming a filtering template from the target pixel point, as the center, together with the surrounding pixel points within a range of n pixels;
acquiring pixel values corresponding to the region where the filtering template is located, and averaging the pixel values to obtain a pixel average value;
and replacing the pixel value corresponding to the target pixel point in the original filtering template with the calculated pixel average value; as n grows larger, the image becomes increasingly blurred. Finally, the blurred, filtered image is uploaded to the cloud.
Specifically, assuming that the input image is P (x, y) and the output image of the filtering process is G (x, y), the relationship between the two can be expressed by the following equation:
G(x, y) = u(x, y)
wherein u(x, y) is the average of the pixel values of P in the neighborhood of size (2n+1) × (2n+1) centered at the pixel point (x, y).
As can be seen from the above description, in this embodiment, collected privacy-sensitive images such as face images and license plate images can be blurred by using the mean filtering method; that is, this embodiment takes mean filtering as the means of image blurring. The advantages of mean filtering are that its principle is simple, its computation amount is small, and the time required is short. In this embodiment, a convolutional neural network (CNN) is used to detect the face and license plate information collected by the vehicle-mounted camera, the detected data are then blurred by efficient mean filtering without infringing personal privacy, and the data are uploaded to the vehicle-mounted cloud.
In summary, the present application provides an image processing system: it first acquires a target image photographed in advance or in real time, then selects a pixel point from the target image as an image center point, selects from the target image all pixel points located within a preset area around the image center point, and combines the selected pixel points with the image center point to generate a filtering template; finally, it calculates the average pixel value of the filtering template and uses it to replace the pixel value at the image center point, thereby filtering the target image. The target image includes a face image and/or a license plate image. In other words, the system takes a certain target pixel point in the target image as the image center point, selects the n pixel points around the image center point from the target image and combines them with the image center point to form a filtering template, averages the pixel values in the filtering template, and replaces the pixel value at the center of the filtering template with the calculated average pixel value, thereby filtering the image. The system adopts the mean filtering algorithm as the image blurring method; the algorithm is simple to implement and fast, so a large amount of computing resources can be saved when processing massive data. This solves the problems of large computation amount and long running time encountered in the prior art when blurring face images and/or license plate images.
In other words, in a complex traffic environment the collected data are affected by factors such as motion blur, occlusion by ornaments, and environmental interference. Based on the automatic feature extraction of the deep convolutional network, the system is highly robust to scenes and interference factors, can detect face and license plate information in different environments, and uploads the detected data to the cloud after mean-filter blurring, which saves a large amount of computing resources and also safeguards the privacy of others. In addition, the system uses a deep convolutional neural network to solve face and license plate detection: license plate images in different scenes can be collected first, and the strong automatic feature-extraction capability of the deep neural network is then used to extract the global and local invariant features of the license plate and face data in different scenes. Compared with traditional methods based on hand-crafted visual features, the system can accurately recognize face and license plate information in different scenes. The present application uses a convolutional neural network (CNN) to detect the face and license plate information collected by the vehicle-mounted camera, can perform efficient mean-filter blurring on the detected data without infringing personal privacy, and can upload the blurred image to the vehicle-mounted cloud.
It should be noted that the image processing system provided in the foregoing embodiment and the image processing method provided in the foregoing embodiment belong to the same concept, and specific ways for the modules and units to perform operations have been described in detail in the method embodiments, and are not described herein again. In practical applications, the image processing system provided in the above embodiment may allocate the above functions to different functional modules according to needs, that is, the internal structure of the system is divided into different functional modules to complete all or part of the above described functions, which is not limited herein.
An embodiment of the present application further provides an electronic device, including: one or more processors; a storage device for storing one or more programs, which when executed by the one or more processors, cause the electronic device to implement the image processing method provided in the above-described embodiments.
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes, such as executing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising a computer program for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program executes various functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may comprise a propagated data signal with a computer-readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the image processing method as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment, or may exist separately without being incorporated in the electronic device.
Another aspect of the application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the image processing method provided in the above-described embodiments.
The above-described embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. An image processing method, characterized in that it comprises the steps of:
acquiring a target image shot in advance or in real time;
selecting a pixel point from the target image as an image center point, and selecting, from the target image, all pixel points located within a preset area around the image center point;
combining the selected pixel points with the image center points to generate a filtering template;
and calculating the average pixel value of the filtering template, and replacing the pixel value of the filtering template with the calculated average pixel value so as to filter the target image.
2. The image processing method according to claim 1, wherein after acquiring the target image taken in advance or in real time, the method further comprises:
acquiring multiple preset groups of scaling factors, and scaling the target image in an equal proportion by using the multiple groups of scaling factors to form an image pyramid;
scanning the image pyramid in the horizontal and vertical directions with a proportionally scaled step size to obtain images at different layers of the image pyramid;
marking images of different layers by using a deep convolutional neural network, acquiring a face potential area in each image, and recording the coordinates and the size of the upper left corner of the face potential area as a response point;
and reversely mapping all the response points to the target image, fusing the overlapped potential human face areas, and performing human face detection on the target image according to a fusion result.
3. The image processing method according to claim 1 or 2, wherein after acquiring the target image photographed in advance or in real time, the method further comprises:
carrying out graying processing on the target image to obtain a corresponding grayscale image;
carrying out Gaussian blur on the gray level image, and filtering noise of the gray level image to obtain a noise-removed gray level image;
calculating a derivative of the denoising grayscale image in a first-order horizontal direction, and searching a vertical edge of the denoising grayscale image according to a derivative calculation result;
acquiring a binary threshold value of the denoising gray level image by using a self-adaptive threshold value algorithm, and generating a binary picture;
performing a closing operation on the binary picture, removing the blank spaces between adjacent vertical edge lines, and connecting all vertical-edge regions to obtain license plate candidate regions;
and distinguishing the license plate candidate regions based on the aspect ratio of the outline circumscribed rectangle and the area of the license plate candidate regions so as to detect the license plate of the target image.
4. The image processing method according to claim 1, wherein the process of acquiring the target image photographed in advance or in real time includes:
acquiring an image shooting device preset on a target vehicle, and acquiring a target image shot by the image shooting device in advance or in real time;
wherein the target vehicle comprises at least one of: full-automatic intelligent driving vehicles, semi-automatic intelligent driving vehicles and ordinary motor driving vehicles;
the image shooting device is arranged at the front end, the rear end and/or the side of the target vehicle in advance;
the target image comprises a face image and/or a license plate image.
5. The image processing method according to claim 1, characterized in that the method further comprises: uploading the filtered target image to a preset storage area, wherein the preset storage area comprises a cloud server.
6. An image processing system, comprising:
the image acquisition module is used for acquiring a target image shot in advance or in real time;
the pixel point processing module is used for selecting a pixel point from the target image as an image center point, and for selecting, from the target image, all pixel points located within a preset area around the image center point;
the filtering module is used for combining the selected pixel points with the image center point to generate a filtering template; and calculating an average pixel value of the filtering template, and replacing the pixel value of the filtering template with the calculated average pixel value to perform filtering processing on the target image.
7. The image processing system according to claim 6, further comprising a face detection module configured to perform face detection on the target image after the target image is acquired, wherein the face detection process comprises:
acquiring multiple preset groups of scaling factors, and scaling the target image proportionally with the groups of scaling factors to form an image pyramid;
scanning the image pyramid horizontally and vertically with step sizes scaled by the same factors to obtain the images of the different pyramid levels;
labeling the images of the different levels with a deep convolutional neural network, obtaining the potential face regions in each image, and recording the top-left coordinates and the size of each potential face region as a response point;
and mapping all response points back onto the target image, fusing the overlapping potential face regions, and performing face detection on the target image according to the fusion result.
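The pyramid construction and the back-mapping of response points can be sketched as follows. This is a simplified assumption of the claimed scheme: a single repeated scale factor stands in for the "multiple preset groups of scaling factors", nearest-neighbour resampling stands in for whatever resampling the implementation uses, and the CNN scoring step is omitted entirely:

```python
import numpy as np

def build_pyramid(img, factor=0.5, min_side=12):
    """Scale the image by a fixed factor repeatedly (nearest-neighbour
    resampling) until the shorter side would fall below min_side.
    Returns a list of (scale, image) pairs, largest first."""
    levels = [(1.0, img)]
    scale = factor
    while min(img.shape) * scale >= min_side:
        h = int(round(img.shape[0] * scale))
        w = int(round(img.shape[1] * scale))
        ys = (np.arange(h) / scale).astype(int)   # source rows
        xs = (np.arange(w) / scale).astype(int)   # source columns
        levels.append((scale, img[np.ix_(ys, xs)]))
        scale *= factor
    return levels

def map_back(x, y, w, h, scale):
    """Map a window detected at a pyramid level back to original-image
    coordinates, i.e. the 'reverse mapping' of a response point."""
    return (int(x / scale), int(y / scale), int(w / scale), int(h / scale))
```

Because a fixed-size detector window covers a larger original-image area at smaller pyramid levels, scanning every level lets one network handle faces of different sizes; fusing the mapped-back windows (e.g. by non-maximum suppression) then collapses overlapping detections.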
8. The image processing system according to claim 6 or 7, further comprising a license plate detection module configured to perform license plate detection on the target image after the target image is acquired, wherein the license plate detection process comprises:
graying the target image to obtain a corresponding grayscale image;
applying Gaussian blur to the grayscale image to filter out noise and obtain a denoised grayscale image;
calculating the first-order derivative of the denoised grayscale image in the horizontal direction, and locating the vertical edges of the denoised grayscale image from the derivative result;
obtaining a binarization threshold for the denoised grayscale image with an adaptive threshold algorithm, and generating a binary image;
performing a closing operation on the binary image to remove the gaps between adjacent vertical edge lines and connect the regions of all vertical edges, so as to obtain license plate candidate regions;
and screening the license plate candidate regions based on the aspect ratio of the circumscribed rectangle of each contour and the area of each candidate region, so as to detect the license plate in the target image.
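The graying, horizontal-derivative, and binarization steps of the pipeline above can be sketched in NumPy. These are stand-ins under stated assumptions, not the claimed implementation: BT.601 luma weights for graying, a Sobel kernel for the first-order horizontal derivative, and a global mean threshold in place of the adaptive threshold algorithm:

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average grayscale conversion (ITU-R BT.601 luma weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def horizontal_sobel(gray):
    """First-order derivative in the horizontal direction (Sobel kernel);
    large magnitudes mark vertical edges such as plate character strokes.
    Edge-replicate padding keeps the output the same size as the input."""
    g = np.pad(gray.astype(np.float64), 1, mode="edge")
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kx[dy, dx] * g[dy:dy + h, dx:dx + w]
    return np.abs(out)

def binarize(edges):
    """Crude stand-in for the adaptive threshold step: binarize at the
    global mean edge magnitude."""
    return (edges > edges.mean()).astype(np.uint8)
```

On a synthetic image with a single vertical step edge, the Sobel response is zero in the flat regions and peaks at the step, so the binary image isolates exactly the vertical-edge pixels that the subsequent closing operation connects into candidate regions.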
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs that, when executed by the one or more processors, cause the electronic device to implement the image processing method of any one of claims 1 to 5.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to execute the image processing method according to any one of claims 1 to 5.
CN202210752477.XA 2022-06-28 2022-06-28 Image processing method, system, electronic equipment and readable storage medium Pending CN115115546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210752477.XA CN115115546A (en) 2022-06-28 2022-06-28 Image processing method, system, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210752477.XA CN115115546A (en) 2022-06-28 2022-06-28 Image processing method, system, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115115546A true CN115115546A (en) 2022-09-27

Family

ID=83329526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210752477.XA Pending CN115115546A (en) 2022-06-28 2022-06-28 Image processing method, system, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115115546A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278692A (en) * 2023-11-16 2023-12-22 邦盛医疗装备(天津)股份有限公司 Desensitization protection method for monitoring data of medical detection vehicle patients
CN117278692B (en) * 2023-11-16 2024-02-13 邦盛医疗装备(天津)股份有限公司 Desensitization protection method for monitoring data of medical detection vehicle patients

Similar Documents

Publication Publication Date Title
CN107274445B (en) Image depth estimation method and system
Negru et al. Exponential contrast restoration in fog conditions for driving assistance
CN109635656A (en) Vehicle attribute recognition methods, device, equipment and medium neural network based
CN112614136A (en) Infrared small target real-time instance segmentation method and device
CN109934781B (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
CN109214996A (en) A kind of image processing method and device
CN114127784A (en) Method, computer program product and computer readable medium for generating a mask for a camera stream
CN113688839B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN115115546A (en) Image processing method, system, electronic equipment and readable storage medium
CN115131229A (en) Image noise reduction and filtering data processing method and device and computer equipment
CN108090425B (en) Lane line detection method, device and terminal
CN108122209B (en) License plate deblurring method based on countermeasure generation network
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN116342519A (en) Image processing method based on machine learning
CN114283087A (en) Image denoising method and related equipment
CN116263942A (en) Method for adjusting image contrast, storage medium and computer program product
CN112967321A (en) Moving object detection method and device, terminal equipment and storage medium
CN112699714B (en) Blind scene detection method for image and vehicle-mounted terminal
CN116311212B (en) Ship number identification method and device based on high-speed camera and in motion state
CN115861624B (en) Method, device, equipment and storage medium for detecting occlusion of camera
US20240177316A1 (en) Method for segmenting roads in images, electronic device, and storage medium
CN111507324B (en) Card frame recognition method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination