WO2022237811A1 - Image processing method, apparatus, and device - Google Patents

Image processing method, apparatus, and device

Info

Publication number: WO2022237811A1
Application number: PCT/CN2022/092081
Authority: WIPO (PCT)
Prior art keywords: image, target image, pixel, image block, gradient direction
Other languages: English (en), French (fr)
Inventors: 陈铭津, 陈秋伯
Original assignee: 北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2022237811A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, and in particular to an image processing method, apparatus, device, storage medium, computer program product, and computer program.
  • Object detection technology, also known as target extraction technology, identifies target objects with a certain shape or posture in an image through image recognition techniques.
  • A dense repetitive texture is a texture that repeats according to certain rules within an area of an image (such as stripes, a houndstooth pattern, or a zebra pattern). Identifying the image regions occupied by such textures is significant for subsequent processing.
  • Existing methods for recognizing dense repetitive textures in an image usually adopt line-based detection: when a large number of straight lines with the same direction are identified in an area of the image, that area is determined to be occupied by dense repetitive texture.
  • However, line-based detection places high demands on image noise and brightness.
  • When image noise is high or image brightness is low, straight lines in the image cannot be accurately recognized, so the areas occupied by dense repetitive textures cannot be effectively identified, which harms the robustness of recognition.
  • Embodiments of the present disclosure provide an image processing method, apparatus, device, storage medium, computer program product, and computer program that overcome the detection errors which arise in the prior art when image noise is high or image brightness is low, where areas with dense repetitive textures cannot be effectively identified and detection robustness suffers.
  • an embodiment of the present disclosure provides an image processing method, including:
  • an image processing device including:
  • An acquisition module configured to acquire a target image
  • a processing module configured to, for the image block in the target image, determine the distribution state of the gradient direction of each pixel in the image block on the chromaticity;
  • a first identification module configured to determine, if there is a gradient direction interval in which pixels are concentrated in the distribution state, that a dense repetitive texture exists in the image block, and to determine, based on the image block, the first image area occupied by the dense repetitive texture in the target image.
  • an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
  • the memory stores computer-executable instructions
  • the processor executes the computer-executable instructions stored in the memory, so that the processor performs the image processing method described in the first aspect above.
  • an embodiment of the present disclosure provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the image processing method described in the first aspect above is implemented.
  • an embodiment of the present disclosure provides a computer program product, including a computer program; when the computer program is executed by a processor, the image processing method described in the first aspect above is implemented.
  • an embodiment of the present disclosure provides a computer program; when the computer program is executed by a processor, the image processing method described in the first aspect above is implemented.
  • The method obtains a target image; then, for each image block in the target image, it determines the distribution state of the gradient directions of the pixels in the block on the chromaticity. If there is a gradient direction interval in which pixels are concentrated in the distribution state, it is determined that a dense repetitive texture exists in the image block, and the first image area occupied by dense repetitive texture in the target image is determined based on the image block. This avoids the influence of image noise and brightness, effectively identifies areas with dense repetitive textures in the image, and improves the robustness of recognizing dense repetitive texture areas.
  • FIG. 1 is a schematic diagram of a scene of an image processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a histogram of gradient direction distribution of a certain pixel block provided by an embodiment of the present disclosure
  • FIG. 4 is a histogram of gradient direction distribution of another pixel block provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of the first image area occupied by densely repeated textures in the target image;
  • FIG. 6 is a schematic diagram of a second image area occupied by densely repeated textures in the target image;
  • FIG. 7 is a schematic diagram of the final image area occupied by densely repeated textures in the target image, determined according to FIG. 5 and FIG. 6;
  • FIG. 8 is a structural block diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present disclosure.
  • Detection of dense repetitive texture areas in images mainly has two implementations. The first uses deep learning: a learning model (such as a neural network model) is trained on a large amount of image data, and the trained model then predicts whether an image contains dense repetitive texture and the position of the dense repetitive texture area in the image.
  • The second uses traditional straight-line detection: when a large number of straight lines with the same direction are detected in a local area of the image, that area is determined to contain dense repetitive texture.
  • The embodiment of the present disclosure provides an image processing method that divides the image into image blocks and determines the distribution state of the gradient directions of the pixels in each block on the chromaticity. If the distribution contains a gradient direction interval in which pixels are concentrated, the block is determined to contain a dense repetitive texture, and from such blocks the image area occupied by dense repetitive texture in the image can be determined.
  • This image processing method requires no complex model-training process, which improves processing efficiency; it also avoids the influence of image noise and brightness, improving the robustness of detecting dense repetitive texture areas.
  • FIG. 1 is a schematic diagram of a scene of an image processing method provided by an embodiment of the present disclosure.
  • the system provided by this embodiment includes a terminal 101 and a server 102 .
  • The terminal 101 may be a device such as a mobile phone, a tablet computer, or a personal computer.
  • the implementation of the terminal 101 is not particularly limited, as long as the terminal 101 can perform data or information input and output interactions with the server 102 .
  • the server 102 may be a server or a cluster composed of several servers.
  • FIG. 2 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • The method of this embodiment can be applied to the terminal shown in FIG. 1 or to the server shown in FIG. 1; this disclosure places no limitation on this.
  • the image processing method includes:
  • the target image may be an image taken by the terminal, or an image received by the terminal and sent by other devices.
  • the target image may also be an image received by the server and sent by a terminal or other devices.
  • the target image may be an image of shutters, or an image of a zebra crosswalk or the like.
  • the target image may be divided into M ⁇ N rectangular image blocks, where M and N are both positive integers.
  • each image block in the target image may be independent of each other, or may overlap with each other.
  • The overlap ratio of the image blocks can be adjusted according to the available computation and the desired effect: a larger overlap ratio and a larger number of image blocks mean greater computation for the terminal or server but higher recognition accuracy, while a smaller overlap ratio and fewer image blocks mean less computation but lower recognition accuracy.
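The block division described above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's specification; the function name, block dimensions, and the `overlap` parameter are assumptions:

```python
import numpy as np

def split_into_blocks(image, block_h, block_w, overlap=0.0):
    """Divide an image into rectangular blocks; adjacent blocks may
    overlap by the given fraction (0.0 = mutually independent blocks)."""
    stride_h = max(1, int(block_h * (1.0 - overlap)))
    stride_w = max(1, int(block_w * (1.0 - overlap)))
    blocks = []
    h, w = image.shape[:2]
    for top in range(0, h - block_h + 1, stride_h):
        for left in range(0, w - block_w + 1, stride_w):
            # keep the block's position so detected blocks can later be
            # mapped back to the first image area
            blocks.append(((top, left), image[top:top + block_h, left:left + block_w]))
    return blocks

img = np.zeros((120, 160), dtype=np.uint8)
blocks = split_into_blocks(img, 40, 40, overlap=0.5)   # 50% overlap
```

As the text notes, raising `overlap` multiplies the number of blocks (and hence the computation) while refining the localization of the detected area.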
  • the distribution state of the gradient direction of each pixel point on the chromaticity can be calculated according to the chromaticity parameter of each pixel point in the image block.
  • the gradient direction is also referred to as the gradient angle.
  • If such a concentrated interval exists, the pixel block is determined to be an image block with dense repetitive texture.
  • The union of the image regions occupied by all pixel blocks with dense repetitive textures is determined as the first image area occupied by dense repetitive texture in the target image.
  • This embodiment first obtains the target image; then, for each image block in the target image, it identifies the distribution state of the gradient directions of the pixels in the block on the chromaticity. If there is a gradient direction interval in which pixels are concentrated in the distribution state, it is determined that a dense repetitive texture exists in the image block, and the first image area occupied by dense repetitive texture in the target image is determined based on the image block. This avoids the influence of image noise and brightness, effectively identifies areas with dense repetitive textures in the image, and improves the robustness of recognizing dense repetitive texture areas.
  • the process of determining the distribution state of the gradient direction of each pixel in the image block on the chromaticity specifically includes:
  • the chromaticity parameter of a pixel point may be a color pixel value of the image block, or may be a grayscale pixel value of the image block.
  • S2023 Calculate the gradient direction of each pixel in the image block according to the gradient in the first direction and the gradient in the second direction, so as to obtain the distribution state of the gradient direction of each pixel in chromaticity.
  • The first direction of the first direction gradient and the second direction of the second direction gradient are mutually orthogonal.
  • first direction is a horizontal direction
  • second direction is a vertical direction.
  • The first direction gradient of the pixel is Gx(m,n) = G(m+1,n) - G(m,n), where G(m+1,n) is the pixel adjacent to pixel (m,n) in the first direction.
  • The second direction gradient of the pixel is Gy(m,n) = G(m,n+1) - G(m,n), where G(m,n+1) is the pixel adjacent to pixel (m,n) in the second direction.
  • angle(m,n) is the gradient direction of the pixel (m,n); the value range of angle(m,n) is [0,360), in degrees; Gx(m,n) is the first direction gradient value of the pixel (m,n); Gy(m,n) is the second direction gradient value of the pixel (m,n).
  • The first direction gradient and the second direction gradient of each pixel in the image block are calculated from the chromaticity parameters of the pixels, and the gradient direction of each pixel is calculated from these two gradients, thereby obtaining the distribution state of the gradient directions on the chromaticity. The distribution state can thus be obtained from the chromaticity parameters alone, improving processing efficiency during image recognition.
  • Before step S2021, the method may also include:
  • the target image is converted to grayscale.
  • The target image is examined to determine whether it is a color image: if it is a color image, it is converted to a grayscale image; if it is already a grayscale image, no processing is performed.
  • the gradient direction of each pixel in the image block is calculated by directly using the chromaticity parameters of the grayscale image.
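The grayscale conversion step can be sketched minimally as follows. The BT.601 luma weights are an assumption for illustration; the patent does not fix a particular conversion:

```python
import numpy as np

def to_grayscale(image):
    """Convert an H x W x 3 RGB image to grayscale; pass a grayscale
    image through unchanged (no processing, as the text describes)."""
    if image.ndim == 2:
        return image                               # already grayscale
    weights = np.array([0.299, 0.587, 0.114])      # ITU-R BT.601 luma weights (assumed)
    return image[..., :3] @ weights
```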
  • the process of determining that there is dense repetitive texture in the image block specifically includes:
  • If the ratio of the sum of the number of pixels in a first gradient direction interval and the number of pixels in a second gradient direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, it is determined that a dense repetitive texture exists in the image block; the direction difference between the first gradient direction interval and the second gradient direction interval is 180 degrees.
  • a process of dividing the gradient direction interval is also included: dividing the value range of the gradient direction into a plurality of gradient direction intervals.
  • the value range of the gradient direction is [0, 360), and the unit is degree.
  • the value range [0,360) can be divided into multiple gradient direction intervals.
  • For example, the value range [0,360) is divided into 8 gradient direction intervals: [0,45), [45,90), [90,135), [135,180), [180,225), [225,270), [270,315), and [315,360), in degrees.
  • the aforementioned preset threshold may be 70%.
  • the preset threshold is set to 70%, which can ensure the recognition accuracy and reduce the recognition error.
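The decision rule above (eight 45-degree intervals, paired with the interval 180 degrees opposite, 70% threshold) can be sketched as:

```python
import numpy as np

def is_dense_repetitive(angles, num_bins=8, threshold=0.70):
    """Decide whether a block of gradient directions (degrees in [0, 360))
    shows a dense repetitive texture: some gradient direction interval plus
    the interval 180 degrees opposite must jointly hold more than
    `threshold` of all pixels in the block."""
    hist, _ = np.histogram(angles, bins=num_bins, range=(0.0, 360.0))
    total = angles.size
    half = num_bins // 2            # bin i and bin i+half differ by 180 degrees
    for i in range(half):
        if (hist[i] + hist[i + half]) / total > threshold:
            return True
    return False
```

With eight bins this reproduces the example in the text: a block whose angles cluster in [90,135) and [270,315) is flagged, while a scattered distribution is not.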
  • FIG. 3 is a histogram of gradient direction distribution of a certain pixel block provided by an embodiment of the present disclosure.
  • The gradient directions in this pixel block are mainly distributed in the two gradient direction intervals [90,135) and [270,315); the direction difference between these two intervals is 180 degrees, which means the image block is a dense repetitive texture region.
  • FIG. 4 is a histogram of gradient direction distribution of another pixel block provided by an embodiment of the present disclosure. As shown in FIG. 4, the gradient directions in this pixel block are relatively scattered and not concentrated in any specific gradient direction interval, which means the block has no texture along a specific direction and is therefore not a dense repetitive texture area.
  • After step S203 of determining the first image area occupied by densely repeated textures in the target image based on the image block, the method further includes:
  • S204 Determine a second image area occupied by densely repeated textures in the target image according to the variance of the pixel points of the target image and the variance of the pixel points of the blurred target image.
  • Gaussian blur processing is performed on the target image.
  • For a flat region of the image (that is, one without dense repetitive texture), the image remains flat before and after Gaussian blur.
  • A region with dense repetitive texture is not flat before Gaussian blur but becomes flat after Gaussian blur. Therefore, according to whether a region of the image changes from non-flat to flat across Gaussian blur, the image areas with dense repetitive texture can be determined.
  • an image filter may be used to perform Gaussian blur processing on the target image.
  • step S204 according to the variance of the pixel points of the target image and the variance of the pixel points of the blurred target image, determine the second image area occupied by densely repeated textures in the target image, specifically including:
  • S2041: Determine the first local variance of the pixel value of each pixel in the target image, that is, the variance of the pixel values in a neighborhood around each pixel.
  • S2042 Perform Gaussian blur processing on the target image, and determine a second local variance of pixel values of each pixel in the blurred target image.
  • The calculation of the second local variance of each pixel is consistent with the calculation of the first local variance in step S2041 above, and is not repeated here.
  • S2043: Determine the area occupied by all pixels for which the absolute value of the difference between the first local variance and the second local variance exceeds a set limit as the second image area occupied by densely repeated textures in the target image.
  • the set limit can be 5.
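Steps S2041 to S2043 can be sketched as follows. The neighborhood radius and the mean filter standing in for Gaussian blur are assumptions for illustration; only the rule "|variance difference| > limit" comes from the text:

```python
import numpy as np

def local_variance(image, radius=2):
    """Variance of the pixel values in a (2r+1) x (2r+1) neighborhood
    of each pixel (reflect padding at the borders)."""
    k = 2 * radius + 1
    pad = np.pad(image.astype(np.float64), radius, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return windows.var(axis=(-2, -1))

def blur(image, radius=2):
    """Mean filter standing in for Gaussian blur (assumption: any
    smoothing filter illustrates the flattening effect described)."""
    k = 2 * radius + 1
    pad = np.pad(image.astype(np.float64), radius, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(pad, (k, k))
    return windows.mean(axis=(-2, -1))

def second_region_mask(image, limit=5.0):
    """Pixels whose local variance changes by more than `limit` after
    blurring form the second image area (S2043)."""
    v_before = local_variance(image)
    v_after = local_variance(blur(image))
    return np.abs(v_before - v_after) > limit
```

The final region of step S205 is then simply the intersection of this mask with the first image area, e.g. `np.logical_and(first_mask, second_mask)`.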
  • S205 Determine the overlapping area of the first image area and the second image area as the final image area occupied by densely repeated textures in the target image.
  • FIG. 5 is a schematic diagram of the first image area occupied by densely repeated textures in the target image.
  • FIG. 6 is a schematic diagram of the second image area occupied by densely repeated textures in the target image.
  • The overlapping area of the target image that belongs to both the first image area and the second image area with dense repetitive texture is determined as the final dense repetitive texture area.
  • FIG. 7 is a schematic diagram of the final image area occupied by densely repeated textures in the target image determined according to FIG. 5 and FIG. 6 .
  • The target image may include multiple frames of images; the method includes aligning and fusing the multiple frames, where dense repetitive texture areas in the frames are processed using a first alignment algorithm and areas without dense repetitive texture are processed using a second alignment algorithm; the first alignment algorithm is different from the second alignment algorithm.
  • the first alignment algorithm includes a global alignment algorithm
  • the second alignment algorithm includes an optical flow alignment algorithm
  • The global alignment algorithm matches the feature points of two adjacent frames and computes a global mapping matrix from those feature points to align the images; it aligns the whole image without distorting local regions.
  • The optical flow alignment algorithm aligns local motion in the image better, but in areas with densely repeated textures its alignment is worse than the global alignment algorithm's and image distortion is prone to occur.
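The per-region fusion can be sketched as a per-pixel selection between the two candidate alignments. The alignment algorithms themselves are out of scope here; the inputs are assumed to be the same frame already aligned once globally and once by optical flow, together with the dense-texture mask from the earlier steps:

```python
import numpy as np

def fuse_aligned(global_aligned, flow_aligned, dense_mask):
    """Per-pixel selection between two candidate alignments of a frame:
    use the globally aligned result inside dense repetitive texture
    regions (where optical flow tends to distort), and the optical-flow
    result elsewhere (where it tracks local motion better)."""
    return np.where(dense_mask, global_aligned, flow_aligned)
```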
  • FIG. 8 is a structural block diagram of an image processing apparatus provided in an embodiment of the present disclosure.
  • the device includes: an acquisition module 301 , a processing module 302 and a first identification module 303 .
  • An acquisition module 301 configured to acquire a target image
  • a processing module 302 configured to, for the image block in the target image, determine the distribution state of the gradient direction of each pixel in the image block on the chromaticity;
  • The first identification module 303 is configured to determine that a dense repetitive texture exists in the image block if there is a gradient direction interval in which pixels are concentrated in the distribution state, and to determine, based on the image block, the first image area occupied by the densely repeated texture in the target image.
  • The processing module 302 is specifically configured to: obtain the chromaticity parameters of each pixel in the image block; calculate, based on those chromaticity parameters, the first direction gradient and the second direction gradient of each pixel in the image block; and calculate, from the first and second direction gradients, the gradient direction of each pixel, so as to obtain the distribution state of the gradient directions of the pixels on the chromaticity.
  • the processing module 302 is further specifically configured to perform grayscale conversion processing on the target image if the target image is a color image.
  • The first identification module 303 is specifically configured to: if the ratio of the sum of the number of pixels in the first gradient direction interval and the number of pixels in the second gradient direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, determine that a dense repetitive texture exists in the image block; the direction difference between the first gradient direction interval and the second gradient direction interval is 180 degrees.
  • The device further includes a second identification module 304 configured to: determine, according to the variance of the pixels of the target image and the variance of the pixels of the blurred target image, the second image area occupied by densely repeated textures in the target image; and determine the overlapping area of the first image area and the second image area as the final image area occupied by densely repeated textures in the target image.
  • the second identification module 304 is specifically configured to determine a first local variance of pixel values of each pixel in the target image
  • Determining the area occupied by all pixels for which the absolute value of the difference between the first local variance and the second local variance is greater than a set limit as the second image area occupied by densely repeated textures in the target image.
  • The target image includes multiple frames of images; the device further includes an image fusion module 305 configured to align and fuse the multiple frames, where areas with densely repeated textures in the frames are processed using a first alignment algorithm and areas without densely repeated textures are processed using a second alignment algorithm; the first alignment algorithm is different from the second alignment algorithm.
  • the device provided in this embodiment can be used to implement the technical solution of the above method embodiment, and its implementation principle and technical effect are similar, so this embodiment will not repeat them here.
  • the embodiments of the present disclosure further provide an electronic device.
  • the electronic device 400 may be a terminal device or a server.
  • The terminal equipment may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 9 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • An electronic device 400 may include a processing device (such as a central processing unit or a graphics processing unit) 401, which may execute various appropriate actions and processes according to a program stored in read-only memory (ROM) 402 or loaded from a storage device 408 into random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400.
  • the processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404.
  • An input/output (I/O) interface 405 is also connected to bus 404 .
  • an input device 406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device including, for example, a display, a speaker, a vibrator, etc.
  • a storage device 408 including, for example, a magnetic tape, a hard disk, etc.
  • the communication means 409 may allow the electronic device 400 to perform wireless or wired communication with other devices to exchange data. While FIG. 9 shows electronic device 400 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via communication means 409, or from storage means 408, or from ROM 402.
  • the processing device 401 When the computer program is executed by the processing device 401, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can transmit, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • the program code contained on the computer readable medium can be transmitted by any appropriate medium, including but not limited to: electric wire, optical cable, radio frequency (Radio Frequency, RF for short), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is made to execute the methods shown in the above-mentioned embodiments.
  • Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or it can be connected to an external A computer (connected via the Internet, eg, using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, where the name of a unit does not, in some cases, limit the unit itself; for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
  • exemplary types of hardware logic components include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disc read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • an image processing method including:
  • the determining the distribution state of the gradient direction of each pixel in the image block on the chromaticity includes:
  • before acquiring the chromaticity parameters of each pixel in an image block of the target image, the method further includes:
  • if the target image is a color image, a grayscale conversion process is performed on the target image.
  • determining that there is dense repetitive texture in the image block includes:
  • the angle difference between the first gradient direction interval and the second gradient direction interval is 180 degrees.
  • the method further includes:
  • determining the second image area occupied by densely repeated textures in the target image includes:
  • determining the area occupied by all pixels in the target image for which the absolute value of the difference between the first pixel-value local variance and the second pixel-value local variance is greater than a set limit value as the second image area occupied by densely repeated textures in the target image.
  • the target image includes a multi-frame image
  • the methods include:
  • aligning and fusing the multi-frame images, wherein areas of the multi-frame images with dense repetitive texture are processed by a first alignment algorithm and areas without dense repetitive texture are processed by a second alignment algorithm, the first alignment algorithm being different from the second alignment algorithm.
  • an image processing device including:
  • An acquisition module configured to acquire a target image
  • a processing module configured to, for the image block in the target image, determine the distribution state of the gradient direction of each pixel in the image block on the chromaticity;
  • the first identification module is configured to determine that there is a dense repetitive texture in the image block if there is a gradient direction interval where the number of pixels is concentrated in the distribution state, and determine the dense repetitive texture in the target image based on the image block Occupies the first image area.
  • the processing module is specifically configured to obtain the chromaticity parameters of each pixel in the image block; calculate, based on the chromaticity parameters of each pixel in the image block, the first directional gradient and the second directional gradient of each pixel in the image block; and calculate, from the first directional gradient and the second directional gradient, the gradient direction of each pixel in the image block, to obtain the distribution state of the gradient direction of each pixel on the chromaticity.
  • the processing module is further specifically configured to perform grayscale conversion processing on the target image if the target image is a color image.
  • the first identification module is specifically configured to determine that there is a dense repetitive texture in the image block if the ratio of the sum of the number of pixels located in the first gradient direction interval and the number of pixels located in the second gradient direction interval in the distribution state to the total pixels in the image block exceeds a preset threshold, wherein the angle difference between the first gradient direction interval and the second gradient direction interval is 180 degrees.
  • the device further includes: a second identification module, configured to determine, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by densely repeated textures in the target image, and to determine an overlapping area of the first image area and the second image area as the final image area occupied by densely repeated textures in the target image.
  • the second identification module is specifically configured to determine the first pixel-value local variance of each pixel in the target image; perform Gaussian blur processing on the target image and determine the second pixel-value local variance of each pixel in the blurred target image; and determine the area occupied by all pixels for which the absolute value of the difference between the first pixel-value local variance and the second pixel-value local variance in the target image is greater than the set limit value as the second image area occupied by the densely repeated texture in the target image.
  • the target image includes multiple frames of images; the device further includes: an image fusion module, configured to perform alignment and fusion on the multiple frames of images, wherein areas with densely repeated texture are processed by a first alignment algorithm and areas without densely repeated texture are processed by a second alignment algorithm, the first alignment algorithm being different from the second alignment algorithm.
  • an electronic device including: at least one processor and a memory;
  • the memory stores computer-executable instructions
  • the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the image processing method described in the above first aspect and its various possible embodiments.
  • a computer-readable storage medium stores computer-executable instructions which, when executed by a processor, implement the image processing method described in the above first aspect and its various possible embodiments.
  • an embodiment of the present disclosure provides a computer program product, including a computer program.
  • when the computer program is executed by a processor, the image processing method described in the above first aspect and its various possible embodiments is implemented.
  • an embodiment of the present disclosure provides a computer program.
  • when the computer program is executed by a processor, the image processing method described in the above first aspect and its various possible embodiments is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method, apparatus, and device. The method includes: acquiring a target image; for an image block in the target image, determining the distribution state of the chromaticity gradient directions of the pixels in the image block; and, if the distribution state contains gradient-direction intervals in which the pixels are concentrated, determining that dense repetitive texture exists in the image block and determining, based on the image block, a first image area occupied by dense repetitive texture in the target image. The method can effectively identify the areas of an image that contain dense repetitive texture and improves the robustness of identifying such areas.

Description

Image processing method, apparatus, and device
This application claims priority to Chinese patent application No. 202110512765.3, entitled "Image processing method, apparatus, and device" and filed with the China National Intellectual Property Administration on May 11, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of image processing technology, and in particular to an image processing method, apparatus, device, storage medium, computer program product, and computer program.
Background
With the continuous development of computer vision, object detection has become a focus of research and application. Object detection, also called object extraction, uses image recognition to identify target objects of a certain shape or pose in an image. Dense repetitive texture is texture that repeats regularly within a region of an image (for example stripes, houndstooth, or zebra patterns); identifying dense repetitive texture in an image is important for recognizing specific objects during object detection.
At present, existing approaches to identifying dense repetitive texture in an image usually rely on line-based detection: when a large number of lines with a consistent direction are identified in a region of the image, that region is determined to be the region occupied by dense repetitive texture in the image.
However, line-based detection places high demands on both image noise and brightness. When the image is noisy or its brightness is low, lines in the image cannot be identified accurately, so the areas occupied by dense repetitive texture cannot be identified effectively, which hurts the robustness of identification.
Summary
Embodiments of the present disclosure provide an image processing method, apparatus, device, storage medium, computer program product, and computer program that can overcome the prior-art problem that, when the image is noisy or its brightness is low, detection results are error-prone, areas of dense repetitive texture cannot be identified effectively, and detection robustness suffers.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring a target image;
for an image block in the target image, determining the distribution state of the chromaticity gradient directions of the pixels in the image block;
if the distribution state contains gradient-direction intervals in which pixels are concentrated, determining that dense repetitive texture exists in the image block, and determining, based on the image block, a first image area occupied by dense repetitive texture in the target image.
In a second aspect, an embodiment of the present disclosure provides an image processing device, including:
an acquisition module configured to acquire a target image;
a processing module configured to, for an image block in the target image, determine the distribution state of the chromaticity gradient directions of the pixels in the image block;
a first identification module configured to, if the distribution state contains gradient-direction intervals in which pixels are concentrated, determine that dense repetitive texture exists in the image block and determine, based on the image block, a first image area occupied by dense repetitive texture in the target image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image processing method described in the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program which, when executed by a processor, implements the image processing method described in the first aspect.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the image processing method described in the first aspect.
The image processing method, apparatus and device, storage medium, computer program product, and computer program provided by this embodiment acquire a target image; then, for an image block in the target image, identify the distribution state of the chromaticity gradient directions of the pixels in the block; if the distribution state contains gradient-direction intervals in which pixels are concentrated, dense repetitive texture is determined to exist in the block, and a first image area occupied by dense repetitive texture in the target image is determined based on the block. This avoids the influence of image noise and brightness, effectively identifies the areas of the image that contain dense repetitive texture, and improves the robustness of identifying such areas.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present disclosure or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present disclosure; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a scenario of an image processing method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of the image processing method provided by an embodiment of the present disclosure;
Fig. 3 is a gradient-direction histogram of one pixel block provided by an embodiment of the present disclosure;
Fig. 4 is a gradient-direction histogram of another pixel block provided by an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of the first image area occupied by dense repetitive texture in the target image;
Fig. 6 is a schematic diagram of the second image area occupied by dense repetitive texture in the target image;
Fig. 7 is a schematic diagram of the final image area occupied by dense repetitive texture in the target image, determined from Figs. 5 and 6;
Fig. 8 is a structural block diagram of the image processing apparatus provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of the hardware structure of the electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the purposes, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
At present, detection of densely repetitive texture areas in an image mainly takes two forms. First, deep learning: a learning model (for example a neural network) is trained on a large amount of image data and then used to predict whether dense repetitive texture exists in an image and where the densely repetitive texture area is located. Second, traditional line detection: when a large number of lines with a consistent direction are detected in a local area of an image, dense repetitive texture is determined to exist in the image.
However, deep learning requires acquiring large amounts of high-quality training data to train a good model, which is costly and inefficient. With line-based detection, when the image is noisy or its brightness is low, the detection results are error-prone and areas of dense repetitive texture cannot be identified effectively, so stability is poor.
To solve the above technical problems, embodiments of the present disclosure provide an image processing method that divides the image into image blocks and determines, for each block, the distribution state of the chromaticity gradient directions of its pixels. If the distribution state contains gradient-direction intervals in which pixels are concentrated, the block is determined to contain dense repetitive texture, and the image area occupied by dense repetitive texture in the image can then be determined. This image processing method needs no complicated model-training process, which improves processing efficiency, and it avoids the influence of image noise and brightness, which improves the robustness of detecting densely repetitive texture areas.
Referring to Fig. 1, Fig. 1 is a schematic diagram of a scenario of an image processing method provided by an embodiment of the present disclosure. As shown in Fig. 1, the system of this embodiment includes a terminal 101 and a server 102. The terminal 101 may be a mobile phone, a tablet computer, a personal computer, or a similar device. This embodiment places no particular restriction on the implementation of the terminal 101, as long as the terminal 101 can exchange data or information with the server 102. The server 102 may be a single server or a cluster of several servers.
Referring to Fig. 2, Fig. 2 is a flowchart of the image processing method provided by an embodiment of the present disclosure. The method of this embodiment may be applied to the terminal shown in Fig. 1 or to the server shown in Fig. 1; the present disclosure places no restriction on this. The image processing method includes:
S201: acquire a target image.
Optionally, the target image may be an image captured by the terminal, or an image received by the terminal from another device. The target image may also be an image received by the server from the terminal or another device.
For example, the target image may be an image of window blinds, an image of a zebra crossing, and so on.
S202: for an image block in the target image, determine the distribution state of the chromaticity gradient directions of the pixels in the image block.
Specifically, the target image may be divided into M×N rectangular image blocks, where M and N are both positive integers.
Optionally, the image blocks of the target image may be mutually independent or may overlap one another.
When image blocks overlap, the overlap ratio of the blocks can be tuned to the actual computation budget and desired effect. It will be appreciated that the larger the overlap ratio, the more image blocks there are, the greater the computational cost on the terminal or server, and the higher the identification accuracy; the smaller the overlap ratio, the fewer the blocks, the lower the computational cost, and the lower the identification accuracy.
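As an illustrative sketch of the block partitioning described above (the block size and stride values are assumptions for illustration, not values fixed by the disclosure), overlapping blocks can be enumerated by sliding a window with a stride smaller than the block size:

```python
def block_origins(height, width, block=32, stride=16):
    """Top-left coordinates of the rectangular blocks covering a
    height x width image. A stride smaller than the block size makes
    neighboring blocks overlap; stride == block gives independent
    (non-overlapping) blocks."""
    ys = range(0, max(height - block, 0) + 1, stride)
    xs = range(0, max(width - block, 0) + 1, stride)
    return [(y, x) for y in ys for x in xs]
```

With a 64×64 image, `block=32, stride=32` yields 4 independent blocks, while `stride=16` yields 9 overlapping ones, illustrating the accuracy/cost trade-off noted above.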
In embodiments of the present disclosure, the distribution state of the chromaticity gradient directions of the pixels can be computed from the chromaticity parameters of the pixels in the image block. Here, the gradient direction is also called the gradient angle.
S203: if the distribution state contains gradient-direction intervals in which pixels are concentrated, determine that dense repetitive texture exists in the image block, and determine, based on the image block, a first image area occupied by dense repetitive texture in the target image.
Here, the gradient-direction distribution of the pixels can be used to judge whether there are two gradient-direction intervals that together concentrate most of the pixels of the block and whose angles are parallel, differing by 180 degrees. If so, the block is determined to be an image block containing dense repetitive texture.
Specifically, the union of the image areas occupied by all image blocks containing dense repetitive texture is determined as the first image area occupied by dense repetitive texture in the target image.
As can be seen from the above, this embodiment first acquires a target image; then, for an image block in the target image, it identifies the distribution state of the chromaticity gradient directions of the pixels in the block; if the distribution state contains gradient-direction intervals in which pixels are concentrated, dense repetitive texture is determined to exist in the block, and a first image area occupied by dense repetitive texture in the target image is determined based on the block. This avoids the influence of image noise and brightness, effectively identifies the areas of the image that contain dense repetitive texture, and improves the robustness of identifying such areas.
In an embodiment of the present disclosure, the process in step S202 of determining the distribution state of the chromaticity gradient directions of the pixels in the image block specifically includes:
S2021: acquire the chromaticity parameter of each pixel in the image block.
In embodiments of the present disclosure, the chromaticity parameter of a pixel may be a color pixel value of the image block or a grayscale pixel value of the image block.
S2022: based on the chromaticity parameters of the pixels in the image block, compute the first-direction gradient and the second-direction gradient of each pixel in the image block.
S2023: from the first-direction gradient and the second-direction gradient, compute the gradient direction of each pixel in the image block, obtaining the distribution state of the chromaticity gradient directions of the pixels.
Here, the first and second directions of the first-direction and second-direction gradients are orthogonal to each other; for example, the first direction is horizontal and the second direction is vertical.
In embodiments of the present disclosure, let (m, n) be the coordinates of any pixel in the image block and let G(m, n) denote the pixel value of the pixel at (m, n).
Then the first-direction gradient of the pixel is Gx(m, n) = G(m+1, n) - G(m, n), where G(m+1, n) is the pixel adjacent to (m, n) in the first direction.
The second-direction gradient of the pixel is Gy(m, n) = G(m, n+1) - G(m, n), where G(m, n+1) is the pixel adjacent to (m, n) in the second direction.
In summary, the gradient direction angle(m, n) of the pixel (m, n) is:
angle(m, n) = arctan2(Gy(m, n), Gx(m, n)) · 180/π (mod 360)
where angle(m, n) is the gradient direction of pixel (m, n), with a value range of [0, 360) in degrees; Gx(m, n) is the first-direction gradient value of pixel (m, n); and Gy(m, n) is the second-direction gradient value of pixel (m, n).
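A minimal pure-Python sketch of steps S2022 and S2023 (the forward-difference gradients and the atan2-based mapping to [0, 360) follow the formulas above; the tiny example blocks are illustrative assumptions):

```python
import math

def gradient_directions(block):
    """Gradient direction, in degrees within [0, 360), of each pixel of a
    2D grayscale block, using the forward differences from the text:
    Gx(m,n) = G(m+1,n) - G(m,n) and Gy(m,n) = G(m,n+1) - G(m,n).
    Pixels on the last row/column are skipped because they have no
    forward neighbor."""
    h, w = len(block), len(block[0])
    angles = []
    for m in range(h - 1):
        for n in range(w - 1):
            gx = block[m + 1][n] - block[m][n]
            gy = block[m][n + 1] - block[m][n]
            angles.append(math.degrees(math.atan2(gy, gx)) % 360.0)
    return angles
```

For a block that brightens by one gray level per step in both directions, every computed direction is 45 degrees.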
As can be seen from the above, computing the first- and second-direction gradients of each pixel in the image block from the chromaticity parameters of the pixels, and then the gradient direction of each pixel from those gradients, yields the distribution state of the chromaticity gradient directions of the pixels. The distribution state can thus be determined from the chromaticity parameters alone, improving processing efficiency during image identification.
In an embodiment of the present disclosure, before step S2021, the method may further include:
if the target image is a color image, performing grayscale conversion on the target image.
In embodiments of the present disclosure, after the target image is acquired it is checked to determine whether it is a color image. If the image is a color image, the target image is converted to a grayscale image; if the target image is already grayscale, no conversion is performed. After the target image has been converted to grayscale, the chromaticity parameters of the grayscale image are used directly to compute the gradient direction of each pixel in the image block.
As can be seen from the above, since computing the gradient directions of the pixels in an image block does not require the chromaticity parameters of the color version of the target image, color-information processing is avoided, the amount of computation is reduced, and the efficiency of image processing is further improved.
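A sketch of the optional grayscale conversion, assuming the common ITU-R BT.601 luma weights (the specific weights are a widespread convention, not values fixed by the disclosure):

```python
def to_grayscale(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples with values
    in 0..255) to a grayscale image using the BT.601 luma weights
    0.299 R + 0.587 G + 0.114 B."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]
```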
In an embodiment of the present disclosure, the process in step S203 of determining that dense repetitive texture exists in the image block when the distribution state contains gradient-direction intervals in which pixels are concentrated specifically includes:
if the ratio of the sum of the number of pixels in a first gradient-direction interval and the number of pixels in a second gradient-direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, determining that dense repetitive texture exists in the image block, where the directions of the first and second gradient-direction intervals differ by 180 degrees.
In embodiments of the present disclosure, the above steps are preceded by a process of dividing the value range of the gradient direction into several gradient-direction intervals.
In embodiments of the present disclosure, the value range of the gradient direction is [0, 360), in degrees. The range [0, 360) can be divided into several gradient-direction intervals; for example, into 8 intervals: [0, 45), [45, 90), [90, 135), [135, 180), [180, 225), [225, 270), [270, 315), and [315, 360), in degrees.
Optionally, the preset threshold may be 70%. In embodiments of the present disclosure, setting the preset threshold to 70% guarantees identification accuracy and reduces identification error.
Referring to Fig. 3, Fig. 3 is a gradient-direction histogram of one pixel block provided by an embodiment of the present disclosure. As shown in Fig. 3, the gradient directions in the block are mainly distributed in the two intervals [90, 135) and [270, 315), whose directions differ by 180 degrees, indicating that the block is a densely repetitive texture area.
Referring to Fig. 4, Fig. 4 is a gradient-direction histogram of another pixel block provided by an embodiment of the present disclosure. As shown in Fig. 4, the gradient directions in the block are dispersed rather than concentrated in particular gradient-direction intervals, indicating that the block has no texture with a particular direction and is therefore not a densely repetitive texture area.
As can be seen from the above, by counting how many pixels of the image block fall into each gradient-direction interval and checking for two intervals that concentrate the pixels and differ by 180 degrees, image blocks containing dense repetitive texture can be located quickly and efficiently, improving the overall efficiency of detecting the densely repetitive texture areas of the target image.
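The interval test above can be sketched as follows (pure Python; the 8 equal bins and the 70% threshold follow the example values in the text, while the sample angle lists are illustrative assumptions):

```python
def has_dense_repetitive_texture(angles, bins=8, threshold=0.7):
    """Histogram gradient directions (degrees in [0, 360)) into `bins`
    equal intervals and report whether some interval, together with the
    interval 180 degrees away, holds more than `threshold` of all
    pixels."""
    if not angles:
        return False
    width = 360.0 / bins
    hist = [0] * bins
    for a in angles:
        hist[int(a // width) % bins] += 1
    half = bins // 2  # bins i and i + half are 180 degrees apart
    return any((hist[i] + hist[i + half]) / len(angles) > threshold
               for i in range(half))
```

Angles concentrated near 100 and 280 degrees (as in Fig. 3) trigger the test, while angles spread evenly over [0, 360) (as in Fig. 4) do not.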
In an embodiment of the present disclosure, on the basis of the above embodiments, after step S203 determines, based on the image block, the first image area occupied by dense repetitive texture in the target image, the method further includes:
S204: determine, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by dense repetitive texture in the target image.
In embodiments of the present disclosure, Gaussian blur is applied to the target image. Flat areas of the image (that is, areas without dense repetitive texture) remain flat after the blur, whereas areas of the image with dense repetitive texture are not flat before the Gaussian blur and become flat after it. Therefore, by checking whether the flat areas of the image change before and after the Gaussian blur, the image areas containing dense repetitive texture can be determined.
In embodiments of the present disclosure, an image filter may be used to apply the Gaussian blur to the target image.
Specifically, step S204 of determining, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, the second image area occupied by dense repetitive texture in the target image specifically includes:
S2041: determine the first local pixel-value variance of each pixel in the target image.
Here, for each pixel of the target image, a pixel block of set size centered on the pixel (for example a 3×3 pixel block) is taken, and the variance of the pixel values of all pixels in the block is computed, giving the first local pixel-value variance.
S2042: apply Gaussian blur to the target image and determine the second local pixel-value variance of each pixel in the blurred target image.
Here, the second local pixel-value variance of each pixel is computed in the same way as the first local pixel-value variance in step S2041, which is not repeated here.
S2043: determine the area occupied by all pixels of the target image for which the absolute value of the difference between the first and second local pixel-value variances exceeds a set limit as the second image area occupied by dense repetitive texture in the target image.
Optionally, the set limit may be 5.
S205: determine the overlapping area of the first image area and the second image area as the final image area occupied by dense repetitive texture in the target image.
Referring to Fig. 5, Fig. 5 is a schematic diagram of the first image area occupied by dense repetitive texture in the target image. Referring to Fig. 6, Fig. 6 is a schematic diagram of the second image area occupied by dense repetitive texture in the target image. From Figs. 5 and 6, the overlapping area of the first and second image areas, that is, the part of the target image belonging to both, is determined as the final densely repetitive texture area. Referring to Fig. 7, Fig. 7 is a schematic diagram of the final image area occupied by dense repetitive texture in the target image, determined from Figs. 5 and 6.
As can be seen from the above, blurring the target image and identifying the second image area occupied by dense repetitive texture narrows the first image area, making the delineation of the area occupied by dense repetitive texture in the target image more precise and improving the accuracy of identifying such areas.
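Steps S2041 to S2043 can be sketched as below (pure Python; the 3×3 window and the limit of 5 follow the example values in the text, while the toy images, and the use of a precomputed blurred image in place of an actual Gaussian filter, are illustrative assumptions):

```python
def local_variance(img, m, n, radius=1):
    """Variance of the pixel values in the (2*radius+1)^2 window centered
    at (m, n), clipped at the image border (radius=1 gives the 3x3
    block from the text)."""
    h, w = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, m - radius), min(h, m + radius + 1))
            for j in range(max(0, n - radius), min(w, n + radius + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def second_texture_mask(img, blurred, limit=5.0):
    """Mark the pixels whose local variance changes by more than `limit`
    between the original image and its blurred version (step S2043)."""
    h, w = len(img), len(img[0])
    return [[abs(local_variance(img, m, n) - local_variance(blurred, m, n)) > limit
             for n in range(w)] for m in range(h)]
```

A striped image that the blur flattens to a constant gets marked, while an already-flat image does not.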
In an embodiment of the present invention, the target image includes multiple frames, and the method includes: aligning and fusing the multiple frames, wherein areas of the frames containing dense repetitive texture are processed with a first alignment algorithm and areas without dense repetitive texture are processed with a second alignment algorithm, the first alignment algorithm being different from the second.
In embodiments of the present disclosure, the first alignment algorithm includes a global alignment algorithm, and the second alignment algorithm includes an optical-flow alignment algorithm.
The global alignment algorithm matches feature points between two adjacent frames and computes a global mapping matrix from the feature points to align the images; it aligns the image as a whole and does not distort local regions.
The optical-flow alignment algorithm aligns local motion in the image well, but in areas with dense repetitive texture its alignment is worse than the global alignment algorithm's, and aligning such areas with it easily distorts the image.
As can be seen from the above, fusing the images with different alignment algorithms depending on whether densely repetitive texture areas are present prevents distortion in the fused image.
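A structural sketch of the region-dependent alignment described above (the alignment routines are passed in as callables because the disclosure does not fix their implementations; the per-pixel selection shown here is an illustrative assumption, not the disclosed fusion procedure):

```python
def align_frame(frame, texture_mask, align_global, align_flow):
    """Align one frame: pixels inside the dense-texture mask come from
    the globally aligned frame, the remaining pixels from the
    optical-flow-aligned frame."""
    g = align_global(frame)
    f = align_flow(frame)
    return [[g[m][n] if texture_mask[m][n] else f[m][n]
             for n in range(len(frame[0]))] for m in range(len(frame))]
```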
Corresponding to the image processing method of the above embodiments, Fig. 8 is a structural block diagram of the image processing apparatus provided by an embodiment of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown. Referring to Fig. 8, the apparatus includes an acquisition module 301, a processing module 302, and a first identification module 303.
The acquisition module 301 is configured to acquire a target image;
the processing module 302 is configured to, for an image block in the target image, determine the distribution state of the chromaticity gradient directions of the pixels in the image block;
the first identification module 303 is configured to, if the distribution state contains gradient-direction intervals in which pixels are concentrated, determine that dense repetitive texture exists in the image block and determine, based on the image block, a first image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the processing module 302 is specifically configured to acquire the chromaticity parameter of each pixel in the image block; compute, based on the chromaticity parameters of the pixels in the image block, the first-direction gradient and the second-direction gradient of each pixel in the image block; and compute, from the first- and second-direction gradients, the gradient direction of each pixel in the image block, obtaining the distribution state of the chromaticity gradient directions of the pixels.
According to one or more embodiments of the present disclosure, the processing module 302 is further specifically configured to perform grayscale conversion on the target image if the target image is a color image.
According to one or more embodiments of the present disclosure, the first identification module 303 is specifically configured to determine that dense repetitive texture exists in the image block if the ratio of the sum of the number of pixels in a first gradient-direction interval and the number of pixels in a second gradient-direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, where the angle difference between the first and second gradient-direction intervals is 180 degrees.
Referring to Fig. 8, according to one or more embodiments of the present disclosure, the apparatus further includes a second identification module 304 configured to determine, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by dense repetitive texture in the target image, and to determine the overlapping area of the first and second image areas as the final image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the second identification module 304 is specifically configured to determine the first local pixel-value variance of each pixel in the target image;
apply Gaussian blur to the target image and determine the second local pixel-value variance of each pixel in the blurred target image;
and determine the area occupied by all pixels of the target image for which the absolute value of the difference between the first and second local pixel-value variances exceeds a set limit as the second image area occupied by dense repetitive texture in the target image.
Referring to Fig. 8, according to one or more embodiments of the present disclosure, the target image includes multiple frames, and the apparatus further includes an image fusion module 305 configured to align and fuse the multiple frames, wherein areas of the frames containing dense repetitive texture are processed with a first alignment algorithm and areas without dense repetitive texture are processed with a second alignment algorithm, the first alignment algorithm being different from the second.
The device provided by this embodiment can be used to execute the technical solutions of the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
To implement the above embodiments, an embodiment of the present disclosure further provides an electronic device.
Referring to Fig. 9, which shows a schematic structural diagram of an electronic device 400 suited to implementing the embodiments of the present disclosure, the electronic device 400 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (Portable Android Devices, PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (for example vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 9, the electronic device 400 may include a processing apparatus (for example a central processing unit or graphics processing unit) 401, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage apparatus 408 into a random access memory (RAM) 403. The RAM 403 also stores the various programs and data needed for the operation of the electronic device 400. The processing apparatus 401, the ROM 402, and the RAM 403 are connected to one another by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following apparatuses may be connected to the I/O interface 405: input apparatuses 406 including, for example, a touchscreen, touchpad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output apparatuses 407 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage apparatuses 408 including, for example, magnetic tape and hard disks; and a communication apparatus 409. The communication apparatus 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 9 shows an electronic device 400 with various apparatuses, it should be understood that implementing or possessing all of the apparatuses shown is not required; more or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 409, or installed from the storage apparatus 408, or installed from the ROM 402. When the computer program is executed by the processing apparatus 401, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted over any appropriate medium, including but not limited to a wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
The above computer-readable medium may be contained in the above electronic device, or it may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings; for example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, where the name of a unit does not, in some cases, limit the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an image processing method, including:
acquiring a target image;
for an image block in the target image, determining the distribution state of the chromaticity gradient directions of the pixels in the image block;
if the distribution state contains gradient-direction intervals in which pixels are concentrated, determining that dense repetitive texture exists in the image block, and determining, based on the image block, a first image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, determining the distribution state of the chromaticity gradient directions of the pixels in the image block includes:
acquiring the chromaticity parameter of each pixel in the image block;
computing, based on the chromaticity parameters of the pixels in the image block, the first-direction gradient and the second-direction gradient of each pixel in the image block;
computing, from the first-direction gradient and the second-direction gradient, the gradient direction of each pixel in the image block, to obtain the distribution state of the chromaticity gradient directions of the pixels.
According to one or more embodiments of the present disclosure, before acquiring the chromaticity parameter of each pixel in an image block of the target image, the method further includes:
if the target image is a color image, performing grayscale conversion on the target image.
According to one or more embodiments of the present disclosure, determining that dense repetitive texture exists in the image block if the distribution state contains gradient-direction intervals in which pixels are concentrated includes:
if the ratio of the sum of the number of pixels in a first gradient-direction interval and the number of pixels in a second gradient-direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, determining that dense repetitive texture exists in the image block;
wherein the angle difference between the first gradient-direction interval and the second gradient-direction interval is 180 degrees.
According to one or more embodiments of the present disclosure, after determining, based on the image block, the first image area occupied by dense repetitive texture in the target image, the method further includes:
determining, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by dense repetitive texture in the target image;
determining the overlapping area of the first image area and the second image area as the final image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, determining, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, the second image area occupied by dense repetitive texture in the target image includes:
determining the first local pixel-value variance of each pixel in the target image;
applying Gaussian blur to the target image and determining the second local pixel-value variance of each pixel in the blurred target image;
determining the area occupied by all pixels of the target image for which the absolute value of the difference between the first and second local pixel-value variances exceeds a set limit as the second image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the target image includes multiple frames;
the method includes:
aligning and fusing the multiple frames, wherein areas of the frames containing dense repetitive texture are processed with a first alignment algorithm and areas without dense repetitive texture are processed with a second alignment algorithm, the first alignment algorithm being different from the second.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an image processing apparatus, including:
an acquisition module configured to acquire a target image;
a processing module configured to, for an image block in the target image, determine the distribution state of the chromaticity gradient directions of the pixels in the image block;
a first identification module configured to, if the distribution state contains gradient-direction intervals in which pixels are concentrated, determine that dense repetitive texture exists in the image block and determine, based on the image block, a first image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the processing module is specifically configured to acquire the chromaticity parameter of each pixel in the image block; compute, based on the chromaticity parameters, the first-direction and second-direction gradients of each pixel in the image block; and compute, from the first- and second-direction gradients, the gradient direction of each pixel, to obtain the distribution state of the chromaticity gradient directions of the pixels.
According to one or more embodiments of the present disclosure, the processing module is further specifically configured to perform grayscale conversion on the target image if the target image is a color image.
According to one or more embodiments of the present disclosure, the first identification module is specifically configured to determine that dense repetitive texture exists in the image block if the ratio of the sum of the number of pixels in a first gradient-direction interval and the number of pixels in a second gradient-direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, wherein the angle difference between the first and second gradient-direction intervals is 180 degrees.
According to one or more embodiments of the present disclosure, the apparatus further includes a second identification module configured to determine, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by dense repetitive texture in the target image, and to determine the overlapping area of the first and second image areas as the final image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the second identification module is specifically configured to determine the first local pixel-value variance of each pixel in the target image; apply Gaussian blur to the target image and determine the second local pixel-value variance of each pixel in the blurred target image; and determine the area occupied by all pixels for which the absolute value of the difference between the first and second local pixel-value variances exceeds a set limit as the second image area occupied by dense repetitive texture in the target image.
According to one or more embodiments of the present disclosure, the target image includes multiple frames; the apparatus further includes an image fusion module configured to align and fuse the multiple frames, wherein areas of the frames containing dense repetitive texture are processed with a first alignment algorithm and areas without dense repetitive texture are processed with a second alignment algorithm, the first alignment algorithm being different from the second.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device, including at least one processor and a memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method described in the first aspect and its various possible embodiments.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image processing method described in the first aspect and its various possible embodiments.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program which, when executed by a processor, implements the image processing method described in the first aspect and its various possible embodiments.
In a sixth aspect, an embodiment of the present disclosure provides a computer program which, when executed by a processor, implements the image processing method described in the first aspect and its various possible embodiments.
The above description covers only the preferred embodiments of the present disclosure and the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example technical solutions formed by substituting the above features with technical features of similar function disclosed in (but not limited to) the present disclosure.
In addition, although the operations are depicted in a particular order, this should not be understood as requiring that they be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. On the contrary, the specific features and acts described above are merely example forms of implementing the claims.

Claims (12)

  1. An image processing method, the method comprising:
    acquiring a target image;
    for an image block in the target image, determining the distribution state of the chromaticity gradient directions of the pixels in the image block;
    if the distribution state contains gradient-direction intervals in which pixels are concentrated, determining that dense repetitive texture exists in the image block, and determining, based on the image block, a first image area occupied by dense repetitive texture in the target image.
  2. The method according to claim 1, wherein determining the distribution state of the chromaticity gradient directions of the pixels in the image block comprises:
    acquiring the chromaticity parameter of each pixel in the image block;
    computing, based on the chromaticity parameters of the pixels in the image block, the first-direction gradient and the second-direction gradient of each pixel in the image block;
    computing, from the first-direction gradient and the second-direction gradient, the gradient direction of each pixel in the image block, to obtain the distribution state of the chromaticity gradient directions of the pixels.
  3. The method according to claim 2, wherein before acquiring the chromaticity parameter of each pixel in an image block of the target image, the method further comprises:
    if the target image is a color image, performing grayscale conversion on the target image.
  4. The method according to claim 1, wherein determining that dense repetitive texture exists in the image block if the distribution state contains gradient-direction intervals in which pixels are concentrated comprises:
    if the ratio of the sum of the number of pixels in a first gradient-direction interval and the number of pixels in a second gradient-direction interval in the distribution state to the total number of pixels in the image block exceeds a preset threshold, determining that dense repetitive texture exists in the image block;
    wherein the directions of the first gradient-direction interval and the second gradient-direction interval differ by 180 degrees.
  5. The method according to any one of claims 1 to 3, wherein after determining, based on the image block, the first image area occupied by dense repetitive texture in the target image, the method further comprises:
    determining, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, a second image area occupied by dense repetitive texture in the target image;
    determining the overlapping area of the first image area and the second image area as the final image area occupied by dense repetitive texture in the target image.
  6. The method according to claim 5, wherein determining, from the variance of the pixels of the target image and the variance of the pixels of the blurred target image, the second image area occupied by dense repetitive texture in the target image comprises:
    determining the first local pixel-value variance of each pixel in the target image;
    applying Gaussian blur to the target image and determining the second local pixel-value variance of each pixel in the blurred target image;
    determining the area occupied by all pixels of the target image for which the absolute value of the difference between the first and second local pixel-value variances exceeds a set limit as the second image area occupied by dense repetitive texture in the target image.
  7. The method according to any one of claims 1 to 6, wherein the target image comprises multiple frames;
    the method comprises:
    aligning and fusing the multiple frames, wherein areas of the frames containing dense repetitive texture are processed with a first alignment algorithm and areas without dense repetitive texture are processed with a second alignment algorithm, the first alignment algorithm being different from the second alignment algorithm.
  8. An image processing apparatus, comprising:
    an acquisition module configured to acquire a target image;
    a processing module configured to, for an image block in the target image, determine the distribution state of the chromaticity gradient directions of the pixels in the image block;
    a first identification module configured to, if the distribution state contains gradient-direction intervals in which pixels are concentrated, determine that dense repetitive texture exists in the image block and determine, based on the image block, a first image area occupied by dense repetitive texture in the target image.
  9. An electronic device, comprising at least one processor and a memory;
    the memory stores computer-executable instructions;
    the processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the image processing method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 7.
  11. A computer program product, comprising a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
  12. A computer program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 7.
PCT/CN2022/092081 2021-05-11 2022-05-10 图像处理方法、装置及设备 WO2022237811A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110512765.3A CN115409881A (zh) 2021-05-11 2021-05-11 图像处理方法、装置及设备
CN202110512765.3 2021-05-11

Publications (1)

Publication Number Publication Date
WO2022237811A1 true WO2022237811A1 (zh) 2022-11-17

Family

ID=84027984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/092081 WO2022237811A1 (zh) 2021-05-11 2022-05-10 图像处理方法、装置及设备

Country Status (2)

Country Link
CN (1) CN115409881A (zh)
WO (1) WO2022237811A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797374A (zh) * 2023-02-03 2023-03-14 长春理工大学 基于图像处理的机场跑道提取方法
CN115880362A (zh) * 2022-12-22 2023-03-31 深圳思谋信息科技有限公司 码区定位方法、装置、计算机设备及计算机可读存储介质
CN115965624A (zh) * 2023-03-16 2023-04-14 山东宇驰新材料科技有限公司 一种抗磨液压油污染颗粒检测方法
CN116110053A (zh) * 2023-04-13 2023-05-12 济宁能源发展集团有限公司 基于图像识别的集装箱表面信息检测方法
CN116843689A (zh) * 2023-09-01 2023-10-03 山东众成菌业股份有限公司 一种菌盖表面破损检测方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116137022B (zh) * 2023-04-20 2023-08-22 山东省三河口矿业有限责任公司 一种用于地下采矿远程监控的数据增强方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130193211A1 (en) * 2012-01-26 2013-08-01 Apple Inc. System and method for robust real-time 1d barcode detection
CN104036232A (zh) * 2014-05-15 2014-09-10 浙江理工大学 一种基于图像边缘特征分析的领带花型检索方法
CN105956509A (zh) * 2016-04-26 2016-09-21 昂纳自动化技术(深圳)有限公司 基于聚类算法的一维条码检测的方法及装置
CN107025639A (zh) * 2017-04-05 2017-08-08 中科微至智能制造科技江苏有限公司 一种复杂环境下的条码定位方法
CN107908996A (zh) * 2017-10-25 2018-04-13 福建联迪商用设备有限公司 一种提取一维条码信息的方法及终端
CN111815578A (zh) * 2020-06-23 2020-10-23 浙江大华技术股份有限公司 图像条纹检测方法、视频监控系统及相关装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130193211A1 (en) * 2012-01-26 2013-08-01 Apple Inc. System and method for robust real-time 1d barcode detection
CN104036232A (zh) * 2014-05-15 2014-09-10 浙江理工大学 一种基于图像边缘特征分析的领带花型检索方法
CN105956509A (zh) * 2016-04-26 2016-09-21 昂纳自动化技术(深圳)有限公司 基于聚类算法的一维条码检测的方法及装置
CN107025639A (zh) * 2017-04-05 2017-08-08 中科微至智能制造科技江苏有限公司 一种复杂环境下的条码定位方法
CN107908996A (zh) * 2017-10-25 2018-04-13 福建联迪商用设备有限公司 一种提取一维条码信息的方法及终端
CN111815578A (zh) * 2020-06-23 2020-10-23 浙江大华技术股份有限公司 图像条纹检测方法、视频监控系统及相关装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIE GUANGHAO, LIU SHIXING;ZHU YAN: "Logistic Barcode Location based on Spacing Matching", JOURNAL OF HEFEI UNIVERSITY OF TECHNOLOGY (NATURAL SCIENCE EDITION), CN, vol. 41, no. 10, 31 October 2018 (2018-10-31), CN , pages 1372 - 1376, XP093003247, ISSN: 1003-5060, DOI: 10.3969/j.issn.1003-5060.2018.10.014 *
YU JUNWEI, ET AL.: "Research on Gradient Direction Evaluation-based Barcode Localization Methods", COMPUTER PROGRAMMING SKILLS & MAINTENANCE, INFORMATION INDUSTRY CHAMBER OF COMMERCE, CN, no. 1, 31 December 2018 (2018-12-31), CN, pages 29 - 32, XP093003252, ISSN: 1006-4052, DOI: 10.16184/j.cnki.comprg.2018.01.005 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880362A (zh) * 2022-12-22 2023-03-31 深圳思谋信息科技有限公司 码区定位方法、装置、计算机设备及计算机可读存储介质
CN115880362B (zh) * 2022-12-22 2023-08-08 深圳思谋信息科技有限公司 码区定位方法、装置、计算机设备及计算机可读存储介质
CN115797374A (zh) * 2023-02-03 2023-03-14 长春理工大学 基于图像处理的机场跑道提取方法
CN115965624A (zh) * 2023-03-16 2023-04-14 山东宇驰新材料科技有限公司 一种抗磨液压油污染颗粒检测方法
CN116110053A (zh) * 2023-04-13 2023-05-12 济宁能源发展集团有限公司 基于图像识别的集装箱表面信息检测方法
CN116843689A (zh) * 2023-09-01 2023-10-03 山东众成菌业股份有限公司 一种菌盖表面破损检测方法
CN116843689B (zh) * 2023-09-01 2023-11-21 山东众成菌业股份有限公司 一种菌盖表面破损检测方法

Also Published As

Publication number Publication date
CN115409881A (zh) 2022-11-29

Similar Documents

Publication Publication Date Title
WO2022237811A1 (zh) 图像处理方法、装置及设备
CN111242881B (zh) 显示特效的方法、装置、存储介质及电子设备
CN110322500B (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
EP3910543A2 (en) Method for training object detection model, object detection method and related apparatus
CN111414879B (zh) 人脸遮挡程度识别方法、装置、电子设备及可读存储介质
WO2020228405A1 (zh) 图像处理方法、装置及电子设备
CN110070551B (zh) 视频图像的渲染方法、装置和电子设备
CN110349212B (zh) 即时定位与地图构建的优化方法及装置、介质和电子设备
CN112258512A (zh) 点云分割方法、装置、设备和存储介质
CN110781823B (zh) 录屏检测方法、装置、可读介质及电子设备
CN112232311B (zh) 人脸跟踪方法、装置及电子设备
CN111127603B (zh) 动画生成方法、装置、电子设备及计算机可读存储介质
US11494961B2 (en) Sticker generating method and apparatus, and medium and electronic device
CN114049674A (zh) 一种三维人脸重建方法、装置及存储介质
CN111368668B (zh) 三维手部识别方法、装置、电子设备及存储介质
WO2023040563A1 (zh) 图像处理方法及设备
CN113963000B (zh) 图像分割方法、装置、电子设备及程序产品
CN113642493B (zh) 一种手势识别方法、装置、设备及介质
CN114049403A (zh) 一种多角度三维人脸重建方法、装置及存储介质
CN114155545A (zh) 表格识别方法、装置、可读介质及电子设备
CN111784607A (zh) 图像色调映射方法、装置、终端设备及存储介质
WO2022194157A1 (zh) 一种目标跟踪方法、装置、设备及介质
WO2023051362A1 (zh) 图像区域处理方法及设备
CN110189279B (zh) 模型训练方法、装置、电子设备及存储介质
CN113808050B (zh) 3d点云的去噪方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22806774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE