CN115187549A - Image gray processing method, device, equipment and storage medium


Info

Publication number: CN115187549A
Application number: CN202210815026.6A
Authority: CN (China)
Legal status: Pending
Prior art keywords: image, target, calibration, gray processing, gray
Other languages: Chinese (zh)
Inventor: 宋依岚
Current Assignee: Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee: Guangzhou Xiaopeng Autopilot Technology Co Ltd
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85 - Stereo camera calibration
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image gray processing method, apparatus, device, and storage medium. The method includes: acquiring a target image; determining a first gray processing algorithm from at least two preset gray processing algorithms and performing gray processing on the target image; extracting image features from the processed image and performing image recognition based on those features; and, when the recognition result does not meet a preset condition, determining a second gray processing algorithm from the at least two preset gray processing algorithms and performing gray processing on the target image again. Because the gray processing algorithm is replaced and recognition is retried whenever the recognition result fails the preset condition, the algorithm used to gray-process the target image is no longer fixed: it can be adjusted according to the recognition result of the target image. This avoids the loss of image features caused by a single gray processing algorithm and improves the accuracy of image recognition.

Description

Image gray processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image gray processing method, apparatus, device, and storage medium.
Background
Image recognition generally involves graying an image, that is, converting a color image into a grayscale image with a grayscale processing algorithm. Image features can then be extracted from the grayscale image and used to identify the target object. The choice of grayscale processing algorithm affects the quality of feature extraction and, in turn, the accuracy of image recognition.
The grayscale processing algorithm in common use today is a weighted-average algorithm that approximates human vision. When the shooting environment of the image to be recognized changes (site, light, weather, and so on), uniformly applying a single grayscale processing algorithm can cause some image features to be lost, reducing recognition accuracy.
Disclosure of Invention
To overcome the problems in the related art, the present application provides an image gray processing method, device, apparatus, and storage medium.
According to a first aspect of embodiments of the present application, there is provided an image gray scale processing method, including:
acquiring a target image;
determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image;
extracting image features from the processed image, and performing image recognition based on the image features;
and under the condition that the identification result does not meet the preset condition, determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image.
According to a second aspect of embodiments of the present application, there is provided a camera calibration method, the method including:
acquiring a calibration image of the camera, wherein the calibration image has a calibration target point;
performing gray scale processing on the calibration image based on the method of the first aspect;
acquiring a target identification result meeting a preset condition;
and determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
According to a third aspect of embodiments of the present application, there is provided a camera calibration method, the method including: acquiring a calibration image of the camera, wherein the calibration image has a calibration target point;
performing gray scale processing on the calibration image based on the method of the first aspect;
under the condition that the recognition result corresponding to each gray processing algorithm does not meet the preset condition, taking the recognition result with the highest proportion of recognized target points as the target recognition result;
and determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
According to a fourth aspect of embodiments of the present application, there is provided an image gradation processing apparatus comprising:
the first acquisition module is used for acquiring a target image;
the first determining module is used for determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image;
the characteristic extraction module is used for extracting image characteristics from the processed image and carrying out image identification based on the image characteristics;
and the second determining module is used for determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image under the condition that the identification result does not meet the preset condition.
According to a fifth aspect of the embodiments of the present application, there is provided a camera calibration apparatus, including:
the first acquisition module is used for acquiring a calibration image of the camera, and the calibration image has a calibration target point;
a first processing module, configured to perform gray scale processing on the calibration image based on the method of the first aspect;
the second acquisition module is used for acquiring a target identification result meeting a preset condition;
and the third determining module is used for determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
According to a sixth aspect of the embodiments of the present application, there is provided a camera calibration apparatus, including:
the second acquisition module is used for acquiring a calibration image of the camera, and the calibration image has a calibration target point;
a second processing module, configured to perform gray scale processing on the calibration image based on the method of the first aspect;
the fourth determining module is used for taking the recognition result with the highest proportion of the number of the identified target points as a target recognition result under the condition that the recognition result corresponding to each gray processing algorithm does not meet the preset condition;
and the fifth determining module is used for determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
According to a seventh aspect of embodiments of the present application, there is provided an electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of the first, second, or third aspects when executing the program.
According to an eighth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for instructing associated hardware to perform the method of any one of the first, second or third aspects.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the method and the device, the gray processing algorithm is replaced to be identified again when the identification result does not meet the preset condition in the image identification process, so that the gray processing algorithm adopted for carrying out gray processing on the target image is not single any more, but corresponding adjustment can be carried out based on the identification result of the target image, the problem that image features are lost due to the adoption of the single gray processing algorithm can be avoided, and the accuracy of image identification is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1A is a flowchart illustrating an image gray scale processing method according to an exemplary embodiment of the present application.
Fig. 1B is a schematic diagram of a calibration image for camera calibration shown in the present application.
Fig. 1C is a flow chart illustrating a camera calibration method according to an exemplary embodiment of the present application.
Fig. 1D is a schematic diagram of another calibration image for camera calibration shown in the present application.
Fig. 1E is a flow chart illustrating a camera calibration method according to another exemplary embodiment of the present application.
Fig. 2A is a block diagram of an image grayscale processing device according to an exemplary embodiment.
Fig. 2B is a block diagram of a camera calibration apparatus according to an exemplary embodiment of the present application.
Fig. 2C is a block diagram of a camera calibration apparatus according to an exemplary embodiment of the present application.
Fig. 3 is a hardware structure diagram of an electronic device in which an image gray scale processing apparatus according to an exemplary embodiment of the present application is shown.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Image recognition technology is widely used in many scenarios, such as face detection, face recognition, obstacle detection in autonomous driving, and camera calibration. Image recognition generally involves graying the image: a grayed image reduces the amount of computation in subsequent processing while retaining characteristics such as the overall and local chromaticity and the distribution of luminance levels. Because the grayed image is the basis for further recognition, the choice of grayscale processing algorithm affects the quality of feature extraction and, in turn, the accuracy of image recognition.
It is understood that the shooting environment when image recognition is performed in different scenes is not constant, such as the weather, light, time, field and the like when the image to be recognized is collected, and these factors may cause the same target object to present different characteristics in different shooting environments. In order to eliminate the influence of the shooting environment on the identification accuracy, the processing mode of the image to be identified should be adjusted. However, the current image recognition technology ignores the influence of the gray level processing algorithm on the recognition accuracy, adopts a uniform gray level processing algorithm (a weighted average algorithm close to human vision) aiming at images acquired under different shooting environments, and does not adjust along with the shooting environments. The inappropriate gray level processing method may cause the gray level difference between the characteristic point of the target object and other characteristic points in the image to be small, so that the target object cannot be distinguished in the subsequent image processing, and the identification accuracy is reduced.
Take the calibration of a vehicle-mounted camera as an example. The calibration image is usually a black-and-white checkerboard, and calibration is completed by identifying the intersection points of the black and white squares. When a camera is replaced, it is usually recalibrated at a 4S shop (an after-sales service dealership). Since calibration environments differ from shop to shop, applying the same grayscale processing method to every calibration image may make the gray values of the black and white squares too close to distinguish, reducing recognition accuracy. Similarly, when detecting obstacles around a vehicle on a cloudy day, the weak light may make an obstacle's color and brightness close to those of the road surface in the captured image; if the grayscale processing method tuned for sunny scenes is applied, the obstacle may be indistinguishable from the road surface and therefore missed.
In order to overcome the problems in the related art, the application provides an image gray processing method, which comprises the steps of firstly selecting one of at least two gray processing algorithms to process and identify a target image, and replacing the gray processing algorithm to identify again when an identification result does not meet a preset condition. The gray processing algorithm adopted for carrying out gray processing on the target image is not single any more, and corresponding adjustment can be carried out based on the identification result of the target image, so that the problem of image characteristic loss caused by adopting the single gray processing algorithm can be avoided, and the accuracy of image identification is improved. The image gray processing method can be suitable for various scenes needing image recognition. Next, examples of the present application will be described in detail.
As shown in fig. 1A, fig. 1A is a flowchart of an image gray scale processing method shown in the present application according to an exemplary embodiment, which includes the following steps:
102, acquiring a target image;
the target image can be acquired in real time through a camera, for example, the camera head acquires a calibration image by shooting a calibration plate at regular time, and the image of the surrounding environment of the vehicle is shot in the running process of the vehicle; or may be a non-real time captured image obtained from an existing database.
104, determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on a target image;
the gray processing of the image is a process of converting the color image into a gray image through a gray processing algorithm, and the gray value of each pixel point is calculated based on the values of three components of R, G and B of each pixel point. The gray scale processing algorithm provides a formula for calculating gray scale values, such as a commonly used weighted average gray scale processing algorithm close to human eye vision, different weights are respectively set for three components of R, G and B based on the sensitivity of human eyes to different colors and are summed to determine the gray scale values, and the calculation formula is expressed as: gray = R0.299 + G0.587 + B0.114. In addition, common gray scale processing algorithms also include a floating point gray scale processing algorithm, an integer gray scale processing algorithm, a single channel gray scale processing algorithm, and the like. The single-channel gray processing algorithm is three different algorithms of taking the value of any one of R, G and B as a gray value and taking R, G or B as a component. In the embodiment of the present application, the grayscale processing method used for performing grayscale processing on an image may be any one of the grayscale processing algorithms described above.
Step 106, extracting image characteristics from the processed image, and performing image recognition based on the image characteristics;
To extract image features, the grayed image may first be binarized: with a suitable threshold, pixels whose gray value exceeds the threshold are set to 255 and the rest to 0, so as to highlight the overall or local features of the image. An algorithm suited to the target to be recognized is then applied to the binarized image; for example, intersections of lines can be extracted with a corner detection algorithm, and a particular region can be extracted with a blob detection algorithm.
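The binarization step described above can be sketched in plain Python (the function name and the flat list-of-gray-values representation are illustrative assumptions):

```python
def binarize(gray_pixels, threshold=128):
    # Pixels whose gray value exceeds the threshold become 255 (white);
    # all others become 0 (black).
    return [255 if p > threshold else 0 for p in gray_pixels]
```

A corner or blob detector would then operate on the resulting binary image.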
And 108, determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image under the condition that the identification result does not meet the preset condition.
The recognition result and the preset condition may be set by the user according to the specific application scenario. Take camera calibration as an example: the target image is a black-and-white checkerboard calibration image (as shown in fig. 1B), the targets to be recognized are the target points 110 (intersections of black and white squares), and the recognition result is generally the pixel coordinates of the recognized target points 110, from which the number of recognized target points can be determined. The pixel coordinates of the target points are used to determine the camera's configuration parameters, and the more target points recognized, the more accurate those parameters. With this precision in mind, the preset condition may be based on the number of recognized target points: for example, that the number itself, or its ratio to the total number of target points in the calibration image, meets a preset threshold. The threshold can be set as required; since some target points in a region of the image may go undetected due to lighting or a dirty camera lens, the threshold may be lowered appropriately.
For example, in a face detection scene, the target image may be a group of people, the image features are used to represent key points of the face, and the recognition result may be an image in which the face is marked or the number of detected faces. The preset condition may be that the proportion of the number of detected faces to the total number of faces in the target image satisfies a preset threshold, and the proportion of the number of detected faces to the total number can reflect the accuracy of face detection.
It should be noted that the above two scenarios are only examples and are not limiting to the method embodiment of the present application.
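The ratio-based preset condition used in both scenarios above can be sketched as follows (the function name and default threshold are illustrative assumptions):

```python
def meets_preset_condition(num_recognized, num_total, min_ratio=0.9):
    # True when the ratio of recognized targets (calibration points, faces, ...)
    # to the total number in the image reaches the required threshold.
    return num_total > 0 and num_recognized / num_total >= min_ratio
```

In practice `min_ratio` would be tuned per scenario, and may be lowered when some targets are expected to be undetectable (poor light, a dirty lens).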
When the recognition result meets the preset condition, there is no need to gray-process the target image with the other grayscale processing algorithms: the algorithm corresponding to that result is taken as the target grayscale processing algorithm, and the result as the target recognition result. In this case the recognition result already meets the requirement, so recognition accuracy is ensured.
When none of the recognition results corresponding to the preset grayscale processing algorithms meets the preset condition, the results can be compared to determine the best one; the algorithm corresponding to the best result is taken as the target grayscale processing algorithm, and the best result as the target recognition result. For example, in camera calibration, the algorithm whose result recognizes the highest proportion of target points may be chosen. Even though no result fully meets the requirement, comparison still yields the result with the relatively highest recognition accuracy, which likewise improves accuracy.
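The overall try-then-fall-back loop of steps 102 to 108 can be sketched in plain Python. This is a hedged illustration, not the patent's implementation: the image is assumed to be a list of (R, G, B) tuples, and `recognize` and `condition` are caller-supplied functions.

```python
def gray_process_with_fallback(image, algorithms, recognize, condition):
    # image: list of (R, G, B) tuples; algorithms: candidate grayscale functions.
    best = None
    for algo in algorithms:
        gray = [algo(r, g, b) for (r, g, b) in image]   # grayscale processing
        result = recognize(gray)                        # feature extraction + recognition
        if condition(result):
            return algo, result                         # preset condition met: stop here
        if best is None or len(result) > len(best[1]):
            best = (algo, result)                       # remember the best result so far
    return best                                         # no result met the condition
```

The first algorithm whose result satisfies the condition wins; otherwise the relatively best result is returned, matching the comparison step described above.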
In view of the foregoing scenario in which the image gray processing method of the present application is applied to camera calibration, the present application further provides a camera calibration method, as shown in fig. 1C, including the following steps:
step 112, collecting a calibration image of the camera, wherein the calibration image has a calibration target point;
the commonly used calibration images include, in addition to the black and white checkerboard shown in fig. 1B, a black and white wave point image shown in fig. 1D, where the center of the black wave point is the calibration point. The calibration environment of the camera is not fixed, for example, the calibration of the vehicle-mounted camera can be performed in a car factory or a 4S shop.
Step 114, determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the calibration image, extracting image features from the processed image, and performing image recognition based on the image features;
step 116, determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the calibration image under the condition that the identification result does not meet the preset condition;
the preset condition may be that the ratio of the number of the identified target points to the total number of the target points in the calibration image satisfies a preset threshold, and the specific implementation process of performing the gray-scale processing on the calibration image is described in the foregoing description of the embodiment of the image gray-scale processing method, and is not described herein again.
Step 118, acquiring a target identification result meeting a preset condition;
and step 120, determining camera configuration parameters based on the target identification result and the three-dimensional coordinates of the target points in the world coordinate system.
The target recognition result is generally the pixel coordinates of the recognized target point, and the camera configuration parameters include internal parameters and external parameters of the camera, wherein the internal parameters are used for representing the transformation mode between the camera coordinate system and the pixel coordinate system, and the external parameters are used for representing the transformation mode between the camera coordinate system and the world coordinate system. A transformation matrix from a three-dimensional space to a two-dimensional imaging plane can be obtained by determining configuration parameters of the camera, so that the distance between an object in a shot image and the camera can be judged based on the transformation matrix. For an in-vehicle camera, the distance between the vehicle and the surroundings can be determined.
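The intrinsic-parameter transformation mentioned above can be illustrated with the standard pinhole projection, which maps a 3-D point in the camera coordinate system to pixel coordinates via the focal lengths (fx, fy) and principal point (cx, cy). This is a generic sketch, not the patent's calibration procedure:

```python
def project_point(point_cam, fx, fy, cx, cy):
    # Pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy,
    # where (X, Y, Z) is the point in camera coordinates.
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)
```

Calibration solves the inverse problem: given many (pixel, world) correspondences from the target points, recover the intrinsic and extrinsic parameters that make this projection consistent.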
The gray processing method adopted by the camera calibration process for the calibration image is not single any more, but the shooting environment of the calibration image is considered, and corresponding adjustment can be made based on the identification result of the calibration image, so that the camera calibration can be performed in different environments, and the most suitable gray processing algorithm is selected to ensure the accuracy of identifying the target point. On the basis, the accuracy of a calibration result (camera configuration parameters) is also ensured.
Further, considering that the recognition result obtained with each preset grayscale processing algorithm may fail to meet the preset condition, the present application also provides a camera calibration method, as shown in fig. 1E, including the following steps:
step 122, collecting a calibration image of the camera, wherein the calibration image has a calibration target point;
step 124, determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the calibration image, extracting image features from the processed image, and performing image recognition based on the image features;
step 126, determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the calibration image under the condition that the identification result does not meet the preset condition;
the recognition result is generally the pixel coordinates of the recognized target points, and the preset condition may be that the ratio of the number of the recognized target points to the total number of the target points in the calibration image satisfies a preset threshold.
Step 128, under the condition that the identification result corresponding to each gray processing algorithm does not meet the preset condition, taking the identification result with the highest proportion of the number of the identified target points as a target identification result;
and step 130, determining camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
The functions and specific implementation processes of each step refer to the description of the corresponding steps of the foregoing embodiment of the camera calibration method, and are not described herein again. The application objects of the camera calibration method provided by the application include, but are not limited to, vehicle-mounted cameras or unmanned aerial vehicle cameras.
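Step 128, selecting the result that recognizes the highest proportion of target points when every algorithm fails the preset condition, can be sketched as follows (the dictionary keyed by algorithm name is an illustrative assumption):

```python
def pick_best_result(results_by_algorithm, total_points):
    # results_by_algorithm: mapping from algorithm name to the list of
    # recognized target points for that algorithm. Returns the (name, points)
    # pair whose recognized-point ratio is highest.
    return max(results_by_algorithm.items(),
               key=lambda item: len(item[1]) / total_points)
```

The selected result then feeds step 130, where the camera configuration parameters are computed from the recognized pixel coordinates and the known world coordinates of the target points.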
Corresponding to the embodiment of the method, the application also provides an image gray scale processing device. As shown in fig. 2A, fig. 2A is a block diagram of an image gray scale processing apparatus 200 according to an exemplary embodiment, the apparatus comprising:
a first obtaining module 201, configured to obtain a target image;
the first determining module 202 is configured to determine a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image;
a feature extraction module 203, configured to extract image features from the processed image, and perform image recognition based on the image features;
and the second determining module 204 is configured to determine, from at least two preset gray processing algorithms, a second gray processing algorithm to perform gray processing on the target image when the recognition result does not satisfy the preset condition.
Fig. 2B is a block diagram of a camera calibration apparatus 210 according to an exemplary embodiment of the present application, where the apparatus includes:
a first acquisition module 211, configured to acquire a calibration image of the camera, where the calibration image has calibration target points;
a first processing module 212, configured to perform gray processing on the calibration image based on the foregoing image gray processing method;
a second obtaining module 213, configured to obtain a target recognition result that meets the preset condition;
and a third determining module 214, configured to determine the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
Fig. 2C is a block diagram of another camera calibration apparatus 220 according to another exemplary embodiment of the present application, the apparatus comprising:
a second acquisition module 221, configured to acquire a calibration image of the camera, where the calibration image has calibration target points;
a second processing module 222, configured to perform gray processing on the calibration image based on the foregoing image gray processing method;
a fourth determining module 223, configured to, when the recognition result corresponding to each gray processing algorithm fails to meet the preset condition, take the recognition result with the highest proportion of recognized target points as the target recognition result;
and a fifth determining module 224, configured to determine the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
The implementation of the functions and roles of each module in the above apparatuses is described in detail in the implementation of the corresponding steps in the above methods, and is not repeated here.
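The application does not specify how the camera configuration parameters are computed from the pixel coordinates and the world coordinates of the target points. For a planar calibration target (Z = 0 on the board), a common first step is estimating the homography between board coordinates and pixel coordinates with the Direct Linear Transform; the plain-numpy sketch below shows this step and is our assumption, not the application's method (a full calibration would go on to recover intrinsics and extrinsics, e.g. via Zhang's method):

```python
import numpy as np

def estimate_homography(world_xy: np.ndarray, pixels: np.ndarray) -> np.ndarray:
    """Direct Linear Transform: estimate the 3x3 homography H mapping planar
    world coordinates (X, Y, 1) to pixel coordinates (u, v, 1) up to scale.
    At least 4 non-degenerate point correspondences are required."""
    assert len(world_xy) == len(pixels) >= 4
    rows = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        # Each correspondence contributes two linear equations in the
        # nine entries of H (from u = h1.p / h3.p and v = h2.p / h3.p).
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector for the smallest
    # singular value, i.e. the (approximate) null space of A.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1
```

Given the pixel coordinates from the target recognition result and the board coordinates of the same target points, the recovered H (or, for several board poses, several H matrices) is the standard input for deriving the camera configuration parameters.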
Since the apparatus embodiments substantially correspond to the method embodiments, refer to the partial description of the method embodiments for the relevant details. The apparatus embodiments described above are merely illustrative: the modules described as separate parts may or may not be physically separate, and the parts shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this application. A person of ordinary skill in the art can understand and implement the embodiments without creative effort.
The embodiments of the image gray processing apparatus and the camera calibration apparatus in this application can be applied to an electronic device. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, an apparatus in a logical sense is formed by the processor of the electronic device reading the corresponding computer program instructions from the nonvolatile memory into the memory and running them. In terms of hardware, fig. 3 shows a hardware structure diagram of an electronic device 300 in which an image gray processing apparatus is located. In addition to the processor 310, the memory 330, the network interface 320, and the nonvolatile memory 340 shown in fig. 3, the electronic device in which the apparatus 331 is located may include other hardware according to its actual function, which is not described again here. The hardware structure of the electronic device where the camera calibration apparatus is located is similar to that of the electronic device 300.
Accordingly, the present application also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any of the foregoing method embodiments when executing the program. The present application further provides a computer-readable storage medium storing a computer program for instructing associated hardware to implement the method according to any of the foregoing method embodiments.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (15)

1. An image gray processing method, comprising:
acquiring a target image;
determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image;
extracting image features from the processed image, and performing image recognition based on the image features;
and under the condition that the identification result does not meet the preset condition, determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image.
2. The method of claim 1, wherein the grayscale processing algorithm comprises one of: a floating point gray scale processing algorithm, an integer gray scale processing algorithm, a weighted average gray scale processing algorithm, or a single channel gray scale processing algorithm.
3. The method of claim 1, wherein the target image is a calibration image of a camera, and the image features are used to characterize a target point in the calibration image.
4. The method of claim 3, wherein the recognition result is pixel coordinates of the identified target point.
5. The method according to claim 4, wherein the preset condition is that a ratio of the number of the identified target points to the total number of target points in the calibration image satisfies a preset threshold.
6. The method of claim 5, further comprising:
and if the recognition result corresponding to each gray processing algorithm does not meet the preset condition, taking the gray processing algorithm corresponding to the recognition result with the highest proportion of identified target points as a target gray processing algorithm.
7. A camera calibration method, comprising:
acquiring a calibration image of the camera, wherein the calibration image has a calibration target point;
performing gray scale processing on the calibration image based on the method of claim 1;
acquiring a target identification result meeting a preset condition;
and determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
8. The method according to claim 7, wherein the preset condition is that a ratio of the number of the identified target points to the total number of target points in the calibration image satisfies a preset threshold.
9. A camera calibration method, comprising:
acquiring a calibration image of the camera, wherein the calibration image has a calibration target point;
performing gray scale processing on the calibration image based on the method of claim 1;
under the condition that the recognition result corresponding to each gray processing algorithm does not meet the preset condition, taking the recognition result with the highest proportion of identified target points as a target recognition result;
and determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
10. The method according to claim 9, wherein the preset condition is that a ratio of the number of the identified target points to the total number of target points in the calibration image satisfies a preset threshold.
11. An image gray processing apparatus, characterized by comprising:
the first acquisition module is used for acquiring a target image;
the first determining module is used for determining a first gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image;
the characteristic extraction module is used for extracting image characteristics from the processed image and carrying out image recognition based on the image characteristics;
and the second determining module is used for determining a second gray processing algorithm from at least two preset gray processing algorithms to perform gray processing on the target image under the condition that the identification result does not meet the preset condition.
12. A camera calibration apparatus, characterized by comprising:
the first acquisition module is used for acquiring a calibration image of the camera, and the calibration image has a calibration target point;
a first processing module, configured to perform gray processing on the calibration image based on the method of claim 1;
the second acquisition module is used for acquiring a target identification result meeting a preset condition;
and the third determining module is used for determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
13. A camera calibration apparatus, characterized by comprising:
the second acquisition module is used for acquiring a calibration image of the camera, and the calibration image has a calibration target point;
a second processing module, configured to perform gray processing on the calibration image based on the method of claim 1;
the fourth determining module is used for taking the recognition result with the highest proportion of the number of the identified target points as a target recognition result under the condition that the recognition result corresponding to each gray processing algorithm does not meet the preset condition;
and the fifth determining module is used for determining the camera configuration parameters based on the target recognition result and the three-dimensional coordinates of the target points in the world coordinate system.
14. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 10 when executing the program.
15. A computer readable storage medium having stored thereon a computer program for instructing associated hardware to perform the method of any one of claims 1 to 10.
CN202210815026.6A 2022-07-11 2022-07-11 Image gray processing method, device, equipment and storage medium Pending CN115187549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210815026.6A CN115187549A (en) 2022-07-11 2022-07-11 Image gray processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210815026.6A CN115187549A (en) 2022-07-11 2022-07-11 Image gray processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115187549A 2022-10-14

Family

ID=83516873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210815026.6A Pending CN115187549A (en) 2022-07-11 2022-07-11 Image gray processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115187549A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218586A (en) * 2023-09-21 2023-12-12 北京市自来水集团有限责任公司技术研究院 Image recognition method, device, equipment and medium for measuring sedimentation velocity of suspended matters
WO2024131499A1 (en) * 2022-12-22 2024-06-27 华为云计算技术有限公司 Data analysis system, method and device


Similar Documents

Publication Publication Date Title
CN111308448B (en) External parameter determining method and device for image acquisition equipment and radar
CN115187549A (en) Image gray processing method, device, equipment and storage medium
US7382902B2 (en) Evaluation of the definition of an eye iris image
CN110197185B (en) Method and system for monitoring space under bridge based on scale invariant feature transform algorithm
Pan et al. No-reference assessment on haze for remote-sensing images
CN114076936A (en) Precision evaluation method and device of combined calibration parameters, server and computer readable storage medium
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN111723805A (en) Signal lamp foreground area identification method and related device
JP2011165170A (en) Object detection device and program
CN107491714B (en) Intelligent robot and target object identification method and device thereof
CN104851102B (en) A kind of infrared small target detection method based on human visual system
CN112396016B (en) Face recognition system based on big data technology
US20160140402A1 (en) Method and system for classifying painted road markings in an automotive driver-vehicle-asistance device
CN114092850A (en) Re-recognition method and device, computer equipment and storage medium
KR102618580B1 (en) Nighttime low-light image enhancement method based on retinex and atmospheric light estimation, apparatus and computer program for performing the method
CN114677670B (en) Method for automatically identifying and positioning identity card tampering
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN113450335B (en) Road edge detection method, road edge detection device and road surface construction vehicle
CN112364693B (en) Binocular vision-based obstacle recognition method, device, equipment and storage medium
JPH1151611A (en) Device and method for recognizing position and posture of object to be recognized
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN114550124A (en) Method for detecting obstacle in parking space and storage medium
CN104680547B (en) A kind of Penetrating Fog algorithm method of discrimination and device
CN115063594B (en) Feature extraction method and device based on automatic driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination