CN116137079A - Image processing method, device and equipment - Google Patents

Image processing method, device and equipment

Info

Publication number
CN116137079A
CN116137079A (application CN202111369280.XA)
Authority
CN
China
Prior art keywords
image
gray
detected
areas
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111369280.XA
Other languages
Chinese (zh)
Inventor
刘宗贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Suzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Suzhou Software Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202111369280.XA priority Critical patent/CN116137079A/en
Publication of CN116137079A publication Critical patent/CN116137079A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, device and equipment, wherein the method comprises the following steps: acquiring a plurality of gray areas of an image to be detected; according to the pixel points in each of the plurality of gray scale areas, acquiring three-dimensional coordinates of the space points corresponding to the pixel points; acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the space points; and identifying the target object in the image to be detected according to the spatial gradient histogram. By this method, image features can be extracted more comprehensively, and the accuracy of image recognition is improved.

Description

Image processing method, device and equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and device.
Background
In recent years, pedestrian detection and face recognition have become increasingly common in daily life, and computer vision has become a research hotspot for researchers. The Histogram of Oriented Gradient (HOG) feature is a feature descriptor used for object detection in computer vision and image processing. It is formed by computing and counting histograms of gradient directions over local areas of an image.
The main disadvantages of existing HOG feature implementations are: the feature dimension is large, and extraction sensitivity is low for high-resolution pictures or pictures taken from a long distance; the computation cost is large, and the time complexity of processing complex pictures is high; and the HOG feature extraction method carries certain errors.
Disclosure of Invention
In view of the foregoing, embodiments of the present invention are directed to providing an image processing method, apparatus, and device that overcome or at least partially solve the foregoing problems.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including:
acquiring a plurality of gray areas of an image to be detected;
according to the pixel points in each gray scale area in the plurality of gray scale areas, three-dimensional coordinates of space points corresponding to the pixel points are obtained;
acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points;
and identifying the target object in the image to be detected according to the spatial gradient histogram.
According to another aspect of an embodiment of the present invention, there is provided an image processing apparatus including:
the acquisition module is used for acquiring a plurality of gray areas of the image to be detected;
the processing module is used for acquiring the three-dimensional coordinates of the space point corresponding to the pixel point according to the pixel point in each gray scale area in the plurality of gray scale areas; acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points; and identifying the target object in the image to be detected according to the spatial gradient histogram.
According to yet another aspect of an embodiment of the present invention, there is provided a computing device including: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image processing method.
According to still another aspect of the embodiments of the present invention, there is provided a computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the above-described image processing method.
According to the scheme provided by the embodiments of the present invention, the three-dimensional coordinates of the space points corresponding to the pixel points are obtained according to the pixel points in each of the plurality of gray scale areas; a spatial gradient histogram of the image to be detected is obtained according to those three-dimensional coordinates; and the target object in the image to be detected is identified according to the spatial gradient histogram. In this way, image features can be extracted while the geometric and optical deformation invariance of the image is maintained, the comprehensiveness of image feature extraction is ensured, and the accuracy of image recognition is further improved.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and may be implemented according to the content of the specification, so that the technical means of the embodiments of the present invention can be more clearly understood, and the following specific implementation of the embodiments of the present invention will be more apparent.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flowchart of an image processing method provided by an embodiment of the present invention;
fig. 2 is a schematic diagram of an image cell to be detected in a flow of an image processing method according to another embodiment of the present invention;
fig. 3 is a schematic diagram of an image cell to be detected in a flow of an image processing method according to another embodiment of the present invention;
fig. 4 is a schematic diagram showing the structure of an image processing apparatus according to an embodiment of the present invention;
FIG. 5 illustrates a schematic diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 shows a flowchart of an image processing method provided by an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
step 11, acquiring a plurality of gray areas of an image to be detected;
step 12, according to the pixel point in each gray scale area in the plurality of gray scale areas, acquiring the three-dimensional coordinates of the space point corresponding to the pixel point;
step 13, acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points;
and step 14, identifying a target object in the image to be detected according to the spatial gradient histogram.
In this embodiment, the gray areas of the image to be detected are obtained so that color does not interfere with feature extraction; the pixel points in each gray area are projected into a spatial coordinate system and the three-dimensional coordinates of the corresponding space points are calculated, so that the image is processed in three-dimensional space, the geometric and optical deformation invariance of the image is guaranteed, and occluded objects can to some extent be predicted and computed. A spatial gradient histogram of the image to be detected is then calculated from the three-dimensional coordinates, and the features of the image are extracted using this histogram, which ensures the comprehensiveness of feature extraction and improves the recognition rate and accuracy for the target object.
In an alternative embodiment of the present invention, step 11 may include:
step 111, obtaining an image to be detected;
step 112, performing gray processing on the image to be detected to obtain a gray image of the image to be detected;
and 113, performing segmentation processing on the gray level image to obtain a plurality of gray level areas.
In this embodiment, the image to be detected is subjected to graying processing to avoid color interference during feature extraction, and the grayed image is then normalized to adjust the contrast of the image to be detected and reduce the influence of information such as the human body, background and illumination in the image;
preferably, the normalization is performed using the Gamma compression formula, i.e.

I(x, y, z) = I(x, y, z)^γ,

where I is the image to be detected. The image to be detected after graying and normalization is then cut into slices, and each slice is processed during feature extraction, ensuring the comprehensiveness and accuracy of the extraction.
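As an illustration of the graying and Gamma normalization just described, the following is a minimal sketch in Python with NumPy; the luminance weights and the value of gamma are common defaults assumed here for illustration, not values fixed by this embodiment.

```python
import numpy as np

def gray_and_gamma(image_rgb: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Gray an 8-bit RGB image, then apply Gamma compression I <- I^gamma."""
    # Weighted graying (ITU-R BT.601 luminance weights, an assumed choice).
    gray = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2]) / 255.0
    # Gamma compression to adjust contrast and weaken illumination effects.
    return np.power(gray, gamma)
```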
In an alternative embodiment of the present invention, step 113 may include:
step 1131, dividing the gray level image according to a first gray level threshold value to obtain a plurality of gray level areas; the value range of the first gray threshold is [0, 255].
In this embodiment, the gray image is segmented within the range of the first gray threshold; preferably, information entropy is used for the segmentation. The more chaotic and uncertain a system is, the larger its information entropy; the more ordered and certain a system is, the smaller its information entropy;
here, the formula may be: H = -Σ P(x) log P(x), where P(x) represents the frequency of occurrence of gray level x and H represents the information entropy.
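A one-line realization of this entropy formula, for reference; the use of the natural logarithm is an assumption, since the embodiment does not fix the log base.

```python
import numpy as np

def information_entropy(p: np.ndarray) -> float:
    """H = -sum P(x) log P(x), over the gray levels that actually occur."""
    p = p[p > 0]  # log is undefined at zero frequency
    return float(-(p * np.log(p)).sum())
```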
In an alternative embodiment of the present invention, step 1131 may include:
step S1, taking a gray scale region with a gray scale value lower than a first gray scale threshold value of the gray scale image as a background region and taking a gray scale region with a gray scale value higher than the first gray scale threshold value as a target region;
step S2, calculating the proportion of each gray level in the background area and the target object area;
step S3, calculating the information entropy of each gray level in the background area and the target object area according to the proportion;
and S4, obtaining a plurality of gray level areas according to the sum of the information entropy of each gray level in the background area and the target object area and the preset maximum information entropy.
In this embodiment, for a candidate threshold i, the proportion of each gray level in the background region may be calculated according to the formula:

P_B(j) = p_j / Σ_{k=0}^{i} p_k, for 0 ≤ j ≤ i,

where P_B(j) represents the proportion of gray level j within the background region and p_j is the frequency with which gray level j occurs in the gray image;

and the proportion of each gray level in the target object region according to the formula:

P_O(j) = p_j / Σ_{k=i+1}^{255} p_k, for i < j ≤ 255,

where P_O(j) represents the proportion of gray level j within the target object region, i takes positive integer values, and P_T represents the sum of the frequencies with which gray values occur in the gray image:

P_T = Σ_{j=0}^{255} p_j.

According to P_B and P_O, the information entropies of the background region and the target object region are:

H_B = -Σ_{j=0}^{i} P_B(j) log P_B(j),
H_O = -Σ_{j=i+1}^{255} P_O(j) log P_O(j).

The two information entropies are added, and the resulting entropy sum is compared with the current maximum information entropy: if the entropy sum is larger than the maximum information entropy, the entropy sum is assigned to the maximum information entropy and i is set as the binarization threshold; if the entropy sum is smaller, the maximum information entropy is unchanged. Optionally, the initial value of the maximum information entropy is set to -1.
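The following sketch puts steps S1-S4 together as a maximum-entropy threshold search; the loop bounds, the natural logarithm and the handling of empty classes are assumptions made for illustration.

```python
import numpy as np

def max_entropy_threshold(gray: np.ndarray) -> int:
    """Choose the binarization threshold i that maximizes H_B + H_O."""
    hist = np.bincount(gray.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()                      # frequency of each gray level
    max_h, best_i = -1.0, 0                    # initial maximum entropy is -1
    for i in range(1, 255):                    # candidate thresholds
        w_b, w_o = p[:i + 1].sum(), p[i + 1:].sum()
        if w_b == 0 or w_o == 0:               # skip degenerate splits
            continue
        q_b = p[:i + 1][p[:i + 1] > 0] / w_b   # P_B(j): within-background proportions
        q_o = p[i + 1:][p[i + 1:] > 0] / w_o   # P_O(j): within-target proportions
        h = -(q_b * np.log(q_b)).sum() - (q_o * np.log(q_o)).sum()
        if h > max_h:                          # keep the largest entropy sum
            max_h, best_i = h, i
    return best_i
```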
In an alternative embodiment of the present invention, step 12 may include:
step 121, obtaining projection coordinates of a spatial point corresponding to a pixel point in each gray scale region in the plurality of gray scale regions;
and step 122, obtaining the three-dimensional coordinates of the space point corresponding to the pixel point according to the homogeneous coordinates of the projection points of the space point in different directions in the gray scale area and the homogeneous coordinates of the space point in a world coordinate system.
In this embodiment, the spatial coordinates of the pixel points are calculated from their coordinates in the projection coordinate system; for irregular objects with a high degree of occlusion, the features of each pixel point can be calculated more accurately in the spatial coordinate system, which improves the recognition accuracy of the target object.
The above embodiments will be described below with reference to specific examples:
For the imaging position q on the image of any point Q in space, q is the intersection of the line OQ (connecting the optical center O with Q) and the image plane, and the following relationship holds:

x = f · X_c / Z_c   formula (12.1)
y = f · Y_c / Z_c   formula (12.2)

where (x, y) are the image coordinates of q, (X_c, Y_c, Z_c) are the coordinates of the space point Q in the projection (camera) coordinate system, and f is the focal length (the distance from the optical center O to the image plane). Using homogeneous coordinates and matrices, this relationship can be expressed as:

Z_c · [x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [X_c, Y_c, Z_c, 1]^T   formula (12.3)

Substituting formula (12.1) and formula (12.2) into formula (12.3) gives the relation between the coordinates of the point Q and the pixel coordinates (u, v) of the point q:

Z_c · [u, v, 1]^T = M · [X, Y, Z, 1]^T   formula (12.4)

where

M = K · [R t],

M is a projection matrix of 3 rows and 4 columns, K collects the camera's intrinsic parameters, R is a rotation matrix, and t is a translation vector;
assume that the camera photographs the point Q from left and right positions, with projection points q_1 and q_2 respectively; the camera positions are fixed and their projection matrices are M_1 and M_2. Then:

Z_c1 · [u_1, v_1, 1]^T = M_1 · [X, Y, Z, 1]^T   formula (12.5)
Z_c2 · [u_2, v_2, 1]^T = M_2 · [X, Y, Z, 1]^T   formula (12.6)

where (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous image coordinates of q_1 and q_2 in their respective images, (X, Y, Z, 1) are the homogeneous coordinates of Q in the world coordinate system, m^k_{ij} denotes the element in the i-th row and j-th column of M_k, and Z_c1, Z_c2 are non-zero proportional terms. Eliminating the proportional terms yields the following linear equations in X, Y and Z:

(u_1·m^1_31 - m^1_11)·X + (u_1·m^1_32 - m^1_12)·Y + (u_1·m^1_33 - m^1_13)·Z = m^1_14 - u_1·m^1_34   formula (12.7)
(v_1·m^1_31 - m^1_21)·X + (v_1·m^1_32 - m^1_22)·Y + (v_1·m^1_33 - m^1_23)·Z = m^1_24 - v_1·m^1_34   formula (12.8)
(u_2·m^2_31 - m^2_11)·X + (u_2·m^2_32 - m^2_12)·Y + (u_2·m^2_33 - m^2_13)·Z = m^2_14 - u_2·m^2_34   formula (12.9)
(v_2·m^2_31 - m^2_21)·X + (v_2·m^2_32 - m^2_22)·Y + (v_2·m^2_33 - m^2_23)·Z = m^2_24 - v_2·m^2_34   formula (12.10)

From these four equations, the values of X, Y and Z can be obtained by the least squares method; (X, Y, Z) are the space coordinates of the point Q.
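For reference, the least-squares solution of the four equations above can be sketched as follows, assuming the two projection matrices are already known from calibration; the function name and the use of numpy.linalg.lstsq are illustrative choices.

```python
import numpy as np

def triangulate(q1, q2, M1, M2):
    """Recover the space coordinates (X, Y, Z) of a point from its two
    projections q1 = (u1, v1) and q2 = (u2, v2), given the 3x4 projection
    matrices M1 and M2 of the two camera positions."""
    A, b = [], []
    for (u, v), M in ((q1, M1), (q2, M2)):
        # Two linear equations per view, per formulas (12.7)-(12.10).
        A.append(u * M[2, :3] - M[0, :3])
        A.append(v * M[2, :3] - M[1, :3])
        b.append(M[0, 3] - u * M[2, 3])
        b.append(M[1, 3] - v * M[2, 3])
    # Overdetermined 4x3 system: solve for (X, Y, Z) by least squares.
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```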
In an alternative embodiment of the present invention, step 13 may include:
step 131, calculating the variation of the three-dimensional coordinates of the space points corresponding to the pixel points in the gray scale area and the three-dimensional coordinates of the space points corresponding to the adjacent pixel points;
step 132, obtaining an image gradient of the pixel point according to the variation of the three-dimensional coordinates;
and step 133, obtaining a spatial gradient histogram of the image to be detected according to the image gradient of the pixel point.
In this embodiment, the image gradient refers to the rate of change of a pixel at a point of the image, relative to its adjacent pixels, in the X, Y and Z directions; it is a three-dimensional vector composed of 3 components: the change along the X axis, the change along the Y axis and the change along the Z axis. The change along the X axis is the pixel value to the right of the current pixel (x plus 1) minus the pixel value to its left (x minus 1); the change along the Y axis is the pixel value in front of the current pixel (y plus 1) minus the pixel value behind it (y minus 1); and the change along the Z axis is the pixel value below the current pixel (z plus 1) minus the pixel value above it (z minus 1). The 3 computed components form a three-dimensional vector, the image gradient of the pixel. The image gradient comprises a gradient direction and a gradient magnitude; by computing the three-dimensional spatial gradients of the image's pixels, the contour information of the image can be captured, and the interference of illumination further weakened. Further, taking the arctangent (arctan) yields the gradient angle. Identifying image features by three-dimensional spatial gradient computation allows the features of each pixel point to be calculated more accurately for irregular objects with a high degree of occlusion, improving recognition accuracy.
Specifically, the gradient of a pixel point (x, y, z) in the image is expressed as:

G_x(x, y, z) = H(x+1, y, z) - H(x-1, y, z)   formula (13.1)
G_y(x, y, z) = H(x, y+1, z) - H(x, y-1, z)   formula (13.2)
G_z(x, y, z) = H(x, y, z+1) - H(x, y, z-1)   formula (13.3)
where G_x(x, y, z) represents the gradient of the pixel point (x, y, z) in the input image along the x-axis direction, G_y(x, y, z) the gradient along the y-axis direction, G_z(x, y, z) the gradient along the z-axis direction, and H(x, y, z) the pixel value at (x, y, z).
The gradient magnitude at the pixel point (x, y, z) is:

G(x, y, z) = sqrt(G_x(x, y, z)^2 + G_y(x, y, z)^2 + G_z(x, y, z)^2)   formula (13.4)
The gradient direction is:

α(x, y, z) = arctan(G_y(x, y, z) / G_x(x, y, z))   formula (13.5)
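A sketch of formulas (13.1)-(13.5) over a 3-D intensity volume; the axis order of the array and the border handling are assumptions, np.gradient's interior differences are scaled by 2 to match the unscaled central differences above, and arctan2 is used in place of arctan for quadrant-safe angles.

```python
import numpy as np

def spatial_gradient(H: np.ndarray):
    """Per-voxel gradient components, magnitude and direction of a volume
    H indexed as H[x, y, z]."""
    # np.gradient returns (H[i+1] - H[i-1]) / 2 in the interior of each axis.
    gx, gy, gz = (2.0 * g for g in np.gradient(H.astype(np.float64)))
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)   # formula (13.4)
    direction = np.arctan2(gy, gx)               # gradient angle, formula (13.5)
    return magnitude, direction
```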
in an alternative embodiment of the present invention, step 14 may include:
step 141, dividing the image to be detected into a plurality of cells;
step 142, collecting directional gradient histogram features in the interval where the cells are spatially interconnected, and obtaining a directional gradient histogram feature set of the image to be detected;
and step 143, obtaining at least one target object according to the directional gradient histogram feature vector in the directional gradient histogram feature set of the image to be detected.
In this embodiment, the image is divided into a plurality of "cells", which makes it convenient to apply different encodings to different areas of the image and reduces the algorithm's sensitivity to the pose and appearance of the human body in the image. Each cell contains a plurality of pixels, and a histogram over direction units is used to count the feature information of one cell.
As shown in fig. 2 and fig. 3, the image is divided into a plurality of "cells", each containing 6×6 pixels, and the feature information of one cell is counted with a histogram of 9 bins (directions): without distinguishing positive and negative directions (i.e. folding the 360-degree range into 180 degrees), the gradient directions of the cell are divided into 9 parts, each 20 degrees corresponding to one direction unit, so that all gradient directions are collected into a 9-dimensional feature vector, i.e. the gradient direction histogram of the cell.
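A sketch of the 9-bin cell histogram just described; hard assignment of each pixel to one 20-degree direction unit and magnitude-weighted voting are assumptions (some HOG variants instead interpolate between neighboring bins).

```python
import numpy as np

def cell_histogram(mag: np.ndarray, ang: np.ndarray, bins: int = 9) -> np.ndarray:
    """Direction histogram of one 6x6 cell: mag and ang hold the gradient
    magnitudes and angles (radians) of the cell's pixels."""
    deg = np.degrees(ang) % 180.0                  # fold signs away: 0..180 degrees
    idx = np.minimum((deg // (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())      # magnitude-weighted voting
    return hist                                    # the cell's 9-dimensional vector
```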
Further, the spatial gradient histogram is normalized to reduce the influence of illumination and similar factors in the image; multiple cells are combined into spatially connected intervals, which avoids large differences between cells caused by variations in the image's background brightness and contrast; HOG feature collection is then performed over the intervals, the HOG feature of one interval being the serial concatenation of all cell feature vectors within it.
Preferably, normalization is carried out with the L2-norm function:

v ← v / sqrt(||v||_2^2 + ε^2),

where v is the feature vector of an interval and ε is a small constant that prevents division by zero.

Through the above process, a single high-dimensional feature vector of β × η × N dimensions can be obtained, where β represents the number of direction (bin) units in each cell, N represents the number of intervals, and η represents the number of cells in an interval.
Finally, the HOG features of all overlapping intervals within a detection window are collected to obtain the directional gradient histogram feature set of the image to be detected;
the directional gradient histogram feature vectors in this feature set are then classified and identified to obtain at least one target object; for example, the feature vectors meeting a preset condition are grouped into one class, and a target object is determined according to the feature vectors of that class.
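The interval (block) assembly and L2-norm step can be sketched as follows; a 2×2-cell interval with a stride of one cell is an assumed layout, since the embodiment does not fix these sizes. The resulting vector is what a downstream classifier consumes.

```python
import numpy as np

def hog_feature_set(cell_hists: np.ndarray, cells_per_block: int = 2,
                    eps: float = 1e-6) -> np.ndarray:
    """Concatenate cell histograms of shape (rows, cols, bins) into
    overlapping intervals and L2-normalize each interval's vector."""
    rows, cols, _ = cell_hists.shape
    feats = []
    for r in range(rows - cells_per_block + 1):
        for c in range(cols - cells_per_block + 1):
            v = cell_hists[r:r + cells_per_block,
                           c:c + cells_per_block].ravel()
            v = v / np.sqrt((v * v).sum() + eps**2)   # v <- v / sqrt(||v||^2 + eps^2)
            feats.append(v)
    return np.concatenate(feats)   # beta * eta * N dimensional feature vector
```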
According to the embodiments of the present invention, gradient calculation is performed on the image in a three-dimensional vector space, and the features of the target object in the image to be detected are extracted while the geometric and optical deformation invariance of the image is maintained; the feature extraction therefore performs well, image features can be extracted more comprehensively, and the accuracy of image recognition is improved.
Fig. 4 shows a schematic structural diagram of an image processing apparatus 40 according to an embodiment of the present invention. As shown in fig. 4, the apparatus 40 includes:
an acquisition module 41, configured to acquire a plurality of gray areas of an image to be detected;
a processing module 42, configured to obtain three-dimensional coordinates of a spatial point corresponding to a pixel point according to the pixel point in each of the plurality of gray scale areas; acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points; and identifying the target object in the image to be detected according to the spatial gradient histogram.
Optionally, the obtaining module 41 is specifically configured to: acquiring an image to be detected, and carrying out gray processing on the image to be detected to obtain a gray image of the image to be detected; and carrying out segmentation processing on the gray level image to obtain a plurality of gray level areas.
Optionally, the dividing the gray scale image to obtain a plurality of gray scale areas includes:
dividing the gray level image according to a first gray level threshold value to obtain a plurality of gray level areas; the value range of the first gray threshold is [0, 255].
Optionally, according to a first gray threshold, the gray image is subjected to segmentation processing to obtain a plurality of gray areas, including:
taking a gray level region of the gray level image, the gray level value of which is lower than a first gray level threshold value, as a background region, and taking a gray level region of which is higher than the first gray level threshold value as a target region;
calculating the proportion of each gray scale in the background area and the target object area;
calculating information entropy of each gray level in the background area and the target object area according to the proportion;
and obtaining a plurality of gray level areas according to the sum of the information entropy of each gray level in the background area and the target object area and the preset maximum information entropy.
Optionally, the processing module 42 is specifically configured to: obtaining projection coordinates of space points corresponding to pixel points in each gray scale region in the plurality of gray scale regions;
and obtaining the three-dimensional coordinates of the space point corresponding to the pixel point according to the homogeneous coordinates of the projection points of the space point in different directions in the gray scale area and the homogeneous coordinates of the space point in a world coordinate system.
Optionally, the processing module 42 is specifically configured to: calculating the change quantity of the three-dimensional coordinates of the space points corresponding to the pixel points in the gray scale area and the three-dimensional coordinates of the space points corresponding to the adjacent pixel points respectively;
obtaining the image gradient of the pixel point according to the variation of the three-dimensional coordinates;
and obtaining a spatial gradient histogram of the image to be detected according to the image gradient of the pixel point.
Optionally, the processing module 42 is specifically configured to: dividing the image to be detected into a plurality of cells; performing directional gradient histogram feature collection in the interval where the cells are spatially communicated to obtain a directional gradient histogram feature set of the image to be detected; and obtaining at least one target object according to the directional gradient histogram feature vector in the directional gradient histogram feature set of the image to be detected.
It should be noted that, the apparatus 40 is an apparatus corresponding to the above method, and all implementation manners in the above method embodiments are applicable to the embodiment of the apparatus, so that the same technical effects can be achieved.
An embodiment of the present invention provides a computer storage medium storing at least one executable instruction that can perform the image processing method in any of the above method embodiments.
FIG. 5 illustrates a schematic diagram of a computing device according to an embodiment of the present invention, and the embodiment of the present invention is not limited to a specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor (processor), a communication interface (Communications Interface), a memory (memory), and a communication bus.
Wherein: the processor, the communication interface and the memory communicate with each other via the communication bus. The communication interface is used for communicating with network elements of other devices, such as clients or other servers. The processor is configured to execute a program, and may specifically perform the relevant steps of the above image processing method embodiment on a computing device.
In particular, the program may include program code including computer-operating instructions.
The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And the memory is used for storing programs. The memory may comprise high-speed RAM memory or may further comprise non-volatile memory, such as at least one disk memory.
The program may be specifically adapted to cause a processor to execute the image processing method in any of the above-described method embodiments. The specific implementation of each step in the program may refer to the corresponding steps and corresponding descriptions in the units in the above image processing method embodiment, which are not repeated herein. It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and modules described above may refer to corresponding procedure descriptions in the foregoing method embodiments, which are not repeated herein.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teachings herein. The required structure for a construction of such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It will be appreciated that the teachings of embodiments of the present invention described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided for disclosure of enablement and best mode of the embodiments of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., an embodiment of the invention that is claimed, requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functionality of some or all of the components according to embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). Embodiments of the present invention may also be implemented as a device or apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the embodiments of the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. Embodiments of the invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (10)

1. An image processing method, the method comprising:
acquiring a plurality of gray areas of an image to be detected;
according to the pixel points in each gray scale area in the plurality of gray scale areas, three-dimensional coordinates of space points corresponding to the pixel points are obtained;
acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points;
and identifying the target object in the image to be detected according to the spatial gradient histogram.
2. The image processing method according to claim 1, wherein acquiring a plurality of gradation areas of the image to be detected includes:
acquiring an image to be detected;
carrying out gray scale processing on the image to be detected to obtain a gray scale image of the image to be detected;
and carrying out segmentation processing on the gray level image to obtain a plurality of gray level areas.
3. The image processing method according to claim 2, wherein the dividing the gradation image to obtain a plurality of gradation regions includes:
dividing the gray level image according to a first gray level threshold value to obtain a plurality of gray level areas; the value range of the first gray threshold is [0, 255].
4. The image processing method according to claim 3, wherein the dividing the gray image according to the first gray threshold value to obtain a plurality of gray areas comprises:
taking a gray level region of the gray level image, the gray level value of which is lower than a first gray level threshold value, as a background region, and taking a gray level region of which is higher than the first gray level threshold value as a target region;
calculating the proportion of each gray value in the background area and the target object area;
calculating information entropy of each gray value in the background area and the target object area according to the proportion;
and obtaining a plurality of gray areas according to the sum of the information entropy of each gray value in the background area and the target object area and the preset maximum information entropy.
5. The image processing method according to claim 1, wherein acquiring three-dimensional coordinates of a spatial point corresponding to a pixel point in each of the plurality of gradation areas from the pixel point, comprises:
obtaining projection coordinates of space points corresponding to pixel points in each gray scale region in the plurality of gray scale regions;
and obtaining the three-dimensional coordinates of the space point corresponding to the pixel point according to the homogeneous coordinates of the projection points of the space point in different directions in the gray scale area and the homogeneous coordinates of the space point in a world coordinate system.
6. The image processing method according to claim 1, wherein acquiring the spatial gradient histogram of the image to be detected based on the three-dimensional coordinates of the spatial points includes:
calculating the change quantity of the three-dimensional coordinates of the space points corresponding to the pixel points in the gray scale area and the three-dimensional coordinates of the space points corresponding to the adjacent pixel points respectively;
obtaining the image gradient of the pixel point according to the variation of the three-dimensional coordinates;
and obtaining a spatial gradient histogram of the image to be detected according to the image gradient of the pixel point.
7. The image processing method according to claim 1, wherein identifying the target object in the image to be detected based on the spatial gradient histogram, comprises:
dividing the image to be detected into a plurality of cells;
performing directional gradient histogram feature collection in the interval where the cells are spatially communicated to obtain a directional gradient histogram feature set of the image to be detected;
and obtaining at least one target object according to the directional gradient histogram feature vector in the directional gradient histogram feature set of the image to be detected.
8. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a plurality of gray areas of an image to be detected;
the processing module is used for acquiring the three-dimensional coordinates of the space point corresponding to the pixel point according to the pixel point in each gray scale area in the plurality of gray scale areas; acquiring a spatial gradient histogram of the image to be detected according to the three-dimensional coordinates of the spatial points; and identifying the target object in the image to be detected according to the spatial gradient histogram.
9. A computing device, comprising: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus;
the memory is configured to store at least one executable instruction, where the executable instruction causes the processor to perform the operations corresponding to the image processing method according to any one of claims 1 to 7.
10. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image processing method of any one of claims 1-7.
CN202111369280.XA 2021-11-18 2021-11-18 Image processing method, device and equipment Pending CN116137079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111369280.XA CN116137079A (en) 2021-11-18 2021-11-18 Image processing method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111369280.XA CN116137079A (en) 2021-11-18 2021-11-18 Image processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN116137079A true CN116137079A (en) 2023-05-19

Family

ID=86326918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111369280.XA Pending CN116137079A (en) 2021-11-18 2021-11-18 Image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN116137079A (en)

Similar Documents

Publication Publication Date Title
US11854173B2 (en) System and method for finding lines in an image with a vision system
CN107358149B (en) Human body posture detection method and device
US11699283B2 (en) System and method for finding and classifying lines in an image with a vision system
CN108564579B (en) Concrete crack detection method and detection device based on time-space correlation
CN111507908B (en) Image correction processing method, device, storage medium and computer equipment
CN112348778B (en) Object identification method, device, terminal equipment and storage medium
CN111860060A (en) Target detection method and device, terminal equipment and computer readable storage medium
CN110570442A (en) Contour detection method under complex background, terminal device and storage medium
CN111882565B (en) Image binarization method, device, equipment and storage medium
CN114862929A (en) Three-dimensional target detection method and device, computer readable storage medium and robot
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN113228105A (en) Image processing method and device and electronic equipment
CN112101134B (en) Object detection method and device, electronic equipment and storage medium
CN114758145A (en) Image desensitization method and device, electronic equipment and storage medium
CN111062927A (en) Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN108960246B (en) Binarization processing device and method for image recognition
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN112733650A (en) Target face detection method and device, terminal equipment and storage medium
CN112150522A (en) Remote sensing image registration method, device, equipment, storage medium and system
CN117253022A (en) Object identification method, device and inspection equipment
CN114708230B (en) Vehicle frame quality detection method, device, equipment and medium based on image analysis
CN113239738B (en) Image blurring detection method and blurring detection device
CN116137079A (en) Image processing method, device and equipment method
CN112329729B (en) Small target ship detection method and device and electronic equipment
CN112101139B (en) Human shape detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination