CN115588018A - Image processing method and device, electronic equipment and storage medium - Google Patents

Publication number
CN115588018A
CN115588018A
Authority
CN
China
Prior art keywords
image
processed
pixel value
cutting line
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211236456.9A
Other languages
Chinese (zh)
Inventor
钱昭焱
马原
晏文仲
田楷
李建达
胡江洪
曹彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fitow Tianjin Detection Technology Co Ltd
Original Assignee
Fitow Tianjin Detection Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fitow Tianjin Detection Technology Co Ltd filed Critical Fitow Tianjin Detection Technology Co Ltd
Priority to CN202211236456.9A
Publication of CN115588018A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: obtaining an original pixel region, a cutting line pixel value, and an overlap region width of an image to be processed; generating each segmented image pixel region of the image to be processed according to the original pixel region, the cutting line pixel value, and the overlap region width; and segmenting the image to be processed according to each segmented image pixel region to obtain segmented images. Each segmented image pixel region of the image to be processed corresponds to one segmented image, so segmenting the image according to these pixel regions yields a plurality of segmented images. Because adjacent segmented images share an overlap region of a certain width, a target lying on a cutting line is not cut apart; false detection and missed detection are thereby avoided, and the image processing effect is improved.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Image processing with deep neural network models touches many aspects of daily life and work. Typically, a picture to be processed is input into a model, and the model performs feature extraction, analysis, detection, and so on to obtain a processing result. However, if a captured high-resolution image is input directly into the detection model, it must be compressed to the model's input size, much information is lost, and the detection result suffers. The existing solution is to crop the original image into small images and input those into the network, but this processing method is prone to false detection and missed detection.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a storage medium. Each segmented image pixel region of an image to be processed is generated according to the original pixel region, the cutting line pixel value, and the overlap region width, where each segmented image pixel region corresponds to one segmented image; the image to be processed is then segmented according to each of these pixel regions to obtain a plurality of segmented images. Because the segmented images share overlap regions of a certain width, a target in the image to be processed is prevented from being cut apart; false detection and missed detection are thereby avoided, and the image processing effect is improved.
In a first aspect, an embodiment of the present application provides an image processing method, including: obtaining an original pixel region, a cutting line pixel value, and an overlap region width of an image to be processed; generating each segmented image pixel region of the image to be processed according to the original pixel region, the cutting line pixel value, and the overlap region width; and segmenting the image to be processed according to each segmented image pixel region to obtain segmented images.
In the implementation process, each segmentation image pixel area of the image to be processed is generated according to the original pixel value, the cutting line pixel value and the width of the overlapping area, a certain width of the overlapping area is formed between the segmentation image pixel areas on the two sides of the cutting line, and the image to be processed is segmented according to the segmentation image pixel areas, so that the target in the image to be processed can be prevented from being segmented, and the image processing accuracy is improved.
Optionally, in this embodiment of the present application, obtaining an original pixel region, a cut line pixel value, and an overlap region width of an image to be processed includes: generating cutting line pixel values according to the original pixel area and the subsequent processing pixel area; the subsequent processing pixel area represents a pixel area which needs to be met when the image to be processed is subjected to subsequent processing; inputting an image to be processed into a preset key point detection model to obtain edge key points of a target to be detected in the image to be processed; and obtaining the width of the overlapping area according to the pixel value of the cutting line and the edge key point of the target to be detected.
In the implementation process, a cut line pixel value is generated according to the original pixel area and the subsequent processing pixel area, that is, the cut line pixel value is generated according to the size relationship of the pixel areas of the original pixel area and the subsequent processing pixel area, and the subsequent processing pixel area is a pixel area which needs to be met when the image to be processed is subjected to subsequent processing. And after the cutting line pixel value is determined, obtaining the width of the overlapping area according to the cutting line pixel value and the edge key point of the target to be detected. The segmentation images at two sides of the pixel value along the cutting line have the width of an overlapping area, so that the object positioned on the cutting line is prevented from being segmented, and information loss in subsequent processing is avoided.
Optionally, in this embodiment of the present application, after segmenting the image to be processed according to each segmented image pixel region to obtain a segmented image, the method further includes: obtaining coordinate information of the segmented image; converting the coordinate information of the segmented image to obtain reduced coordinate information of the segmented image; and the reduced coordinate information represents the coordinate information in the image to be processed obtained by splicing the plurality of segmented images.
In the implementation process, after the image to be processed is segmented, the coordinate information of the segmented image is obtained, and the information of the segmented image is converted into the restored coordinate information in the image to be processed by performing corresponding overlapping and splicing on the segmented image. And converting the coordinates to enable the coordinates of each segmented image to correspond to the restored coordinates in the image to be processed, and finishing the processing of the image to be processed.
Optionally, in this embodiment of the present application, converting the coordinate information of the segmented image to obtain the restored coordinate information of the segmented image includes: sequentially inputting a plurality of segmented images into a preset detection model according to the index numbers of the segmented images to obtain a detection result, where the detection result comprises the coordinate information of each segmented image, and the index number is obtained according to the position of the segmented image in the image to be processed; and converting the coordinate information of the segmented image through a coordinate conversion formula to obtain the restored coordinate information of the segmented image. The coordinate conversion formula is: y = Y − h_i + (H/N) × i, where y is the coordinate value in the slicing direction in the restored coordinate information, Y is the coordinate value in the slicing direction in the coordinate information, h_i is the overlap region width corresponding to the segmented image with index number i, H is the pixel value of the image to be processed in the slicing direction, N is the number of segments of the image to be processed, and i is the index number of the segmented image, i ∈ [0, N−1].
In the implementation process, the coordinate information of the segmented images is converted through a coordinate conversion formula, the restored coordinate information of the segmented images is obtained, and each segmented image is uniformly converted, so that the subsequent processing is facilitated.
Optionally, in this embodiment of the present application, the original pixel region includes a pixel value of the image to be processed in the slicing direction; the segmented images comprise a first segmented image and a second segmented image; and generating each segmented image pixel region of the image to be processed according to the original pixel region, the cutting line pixel value, and the overlap region width comprises: generating each segmented image pixel region of the image to be processed through a division formula according to the cutting line pixel value and the overlap region width. The division formula comprises: the pixel region of the first segmented image is [0, (H/2) + h_i]; the pixel region of the second segmented image is [(H/2) − h_i, H]; where H is the pixel value of the image to be processed in the slicing direction, H/2 is the cutting line pixel value, and h_i is the overlap region width corresponding to the segmented image.
In the implementation process, each segmentation image pixel area of the image to be processed is generated through a segmentation formula, each segmentation image pixel area corresponds to one segmentation image, the pixel areas of the segmentation images are divided, and the segmentation of the image to be processed according to the cutting lines is avoided so that the target is segmented.
Optionally, in an embodiment of the present application, the method further includes: respectively obtaining edge key points of the target to be detected corresponding to two sides of the pixel value of the cutting line; and acquiring the width of an overlapping area corresponding to each divisional image at two sides of the cutting line pixel value according to the cutting line pixel value and the edge key points of the target to be detected corresponding to the two sides of the cutting line pixel value.
In the implementation process, the width of the overlapping region corresponding to each of the segmentation images on the two sides of the cutting line pixel value can be respectively determined according to the cutting line pixel value and the edge key points of the target to be detected corresponding to the two sides of the cutting line pixel value, and the width of the overlapping region of each segmentation image can be flexibly determined, so that each segmentation image has accurate width of the overlapping region.
Optionally, in an embodiment of the present application, wherein the image to be processed includes a gear tooth surface image; the overlap region width comprises a first overlap region width and a second overlap region width; the object to be detected comprises a gear; the edge key points comprise the top point of each tooth and the bottom point of each tooth in the tooth surface of the gear; obtaining the width of the overlapping area according to the pixel value of the cutting line and the edge key point of the target to be detected, and the method comprises the following steps: respectively obtaining a first key point of the gear closest to the pixel value of the cutting line and a second key point of the gear closest to the pixel value of the cutting line according to the pixel value of the cutting line, the top point of each tooth in the tooth surface of the gear and the bottom point of each tooth; the first key point and the second key point of the gear are respectively positioned at two sides of the pixel value of the cutting line; obtaining the width of a first overlapping area according to the distance from a first key point of the gear to the pixel value of the cutting line; and obtaining the width of the second overlapping area according to the distance from the second key point of the gear to the pixel value of the cutting line.
In the implementation process, the image to be processed comprises a gear tooth surface image; the two sides of one cutting line have a first overlapping area width and a second overlapping area width respectively, and the object to be detected comprises a gear. According to the top point and the bottom point of each tooth in the tooth surface image, a first key point of the gear closest to the cutting line pixel value and a second key point of the gear closest to the cutting line pixel value on the other side are obtained respectively, so that the first overlapping area width and the second overlapping area width in the tooth surface image are determined; each tooth of the gear is thus not cut apart, and the information of the gear image remains complete when it is subsequently processed.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including: the acquisition module is used for acquiring an original pixel area, a cutting line pixel value and an overlapping area width of an image to be processed; the pixel area generating module is used for generating each segmentation image pixel area of the image to be processed according to the original pixel value, the cutting line pixel value and the width of the overlapping area; and the segmentation module is used for segmenting the image to be processed according to each segmented image pixel region to obtain a segmented image.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as described above.
In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor, and the computer program performs the method described above.
By adopting the image processing method, the image processing device, the electronic equipment and the storage medium, the image to be processed is segmented according to the pixel region of the segmented image, so that the segmented images of the image to be processed have a certain width of an overlapping region, the target in the image to be processed can be prevented from being segmented, and the image processing accuracy is improved. The coordinate information of the segmented images is converted through a coordinate conversion formula, the restored coordinate information of the segmented images is obtained, and each segmented image is converted in a unified mode, so that subsequent processing is facilitated.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the width of the overlap region provided in the present embodiment;
FIG. 3 is a schematic diagram of the width of the overlapping region of the gear tooth surface images provided by the embodiment of the application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are merely used to more clearly illustrate the technical solutions of the present application, and therefore are only examples, and the protection scope of the present application is not limited thereby.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
In the description of the embodiments of the present application, the technical terms "first", "second", and the like are used only for distinguishing different objects, and are not to be construed as indicating or implying relative importance or implicitly indicating the number, specific order, or primary-secondary relationship of the technical features indicated. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
When processing an image with a model, the model has a fixed input size, assumed here to be 800 × 800, i.e., 800 pixel values in each of the width and height directions, while the original image is 2048 × 3125, i.e., 2048 pixel values in the width direction and 3125 pixel values in the height direction. If the original image is input into the model directly, it is compressed from 2048 × 3125 to 800 × 800: single-pixel precision in the width direction is reduced by a factor of 2048/800 = 2.56, and in the height direction by a factor of 3125/800 ≈ 3.9. This loss of single-pixel precision shows up as lost image features, and those features are crucial to model training and inference. If the original image is instead first cut into two or more segmented images, which are then input into the model for training and inference, fewer features are sacrificed than when the original image is compressed directly. The more features the image retains, the better the model learns and the more accurate the detection. However, when the original image is cropped, a target lying on a cutting line may be split apart, so that no segmented image contains the whole target and the information in the image is lost, causing missed detection and false detection in subsequent processing. The present application therefore provides an image processing method that solves this problem of targets being cut apart.
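The arithmetic above can be sketched in a few lines (a minimal illustration; the 2048 × 3125 original size and 800 × 800 model input are the figures used in the text, and the function name is illustrative):

```python
def precision_loss(original, model_input):
    """Per-axis single-pixel precision loss when an image is compressed
    to a fixed model input size: original extent / input extent."""
    (ow, oh), (mw, mh) = original, model_input
    return ow / mw, oh / mh

# Figures from the text: a 2048 x 3125 original fed into an 800 x 800 model.
loss_w, loss_h = precision_loss((2048, 3125), (800, 800))
print(loss_w, loss_h)  # 2.56 3.90625 -- the 2.56x and ~3.9x losses above
```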
Please refer to fig. 1, which is a schematic flow chart of an image processing method according to an embodiment of the present application.
Step S110: and obtaining an original pixel area, a cutting line pixel value and an overlapping area width of the image to be processed.
The embodiment of step S110 includes: obtaining the original pixel region of the image to be processed. The image to be processed may be a complete image containing a target, such as a part image shot by a camera; its original pixel region is the pixel region of the image before any cropping, representing the size of the image to be processed. The cutting line pixel value is the pixel value of a cutting line pre-set in the image to be processed, i.e., the cutting line pixel value locates the position of the cutting line in the image. The cutting line pixel value can be determined according to the number of segments the image is to be cut into and the size of the segmented images, or it can be set to a fixed value. The overlap region width is the width of the mutually overlapping part between segmented images of the image to be processed; it can be set to a fixed value or calculated from the targets in the segmented images, and the overlap region width of each segmented image may be the same, or may be set separately according to the target in each segmented image.
Step S120: and generating each segmentation image pixel area of the image to be processed according to the original pixel value, the cutting line pixel value and the width of the overlapping area.
The embodiment of the step S120 includes: each segmentation image pixel area of the image to be processed comprises position information and size information of each segmentation image in the image to be processed. According to the original pixel value, the cutting line pixel value and the width of the overlapping region, the pixel region of each segmentation image for generating the image to be processed may be: according to the original pixel value and the cutting line pixel value, an original pixel area of the split image is determined, and then the original pixel area of the split image is expanded according to the width of the obtained overlapping area, so that the split images on two sides of the cutting line pixel value have the overlapping area after the image to be processed is split.
A specific example: the size of the image to be processed is W × H, where W is the pixel value in the width direction and H is the pixel value in the height direction. The image is segmented in the H direction, and the cutting line pixel value is set to H/2; the original pixel range of segmented image A is [0, H/2], and the original pixel range of segmented image B is [(H/2) + 1, H]. If the obtained overlap region width is h, pixels corresponding to the overlap width h are added to the original pixel range of segmented image A in the H direction to obtain the pixel region of segmented image A, and pixels corresponding to the overlap width are likewise added to the original pixel range of segmented image B in the H direction to obtain the pixel region of segmented image B.
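The example can be sketched as follows (a minimal sketch; the half-open ranges, function names, and the h = 100 overlap are illustrative assumptions, not values from the patent):

```python
def slice_regions(H, h):
    """Row ranges of the two slices along the slicing direction, each
    extended past the H/2 cutting line by the overlap width h (the
    patent writes them as closed intervals [0, H/2 + h] and
    [H/2 - h, H]; half-open ranges are used here for list slicing)."""
    cut = H // 2
    return (0, cut + h), (cut - h, H)

def crop_rows(image, region):
    """Crop a row range out of an image stored as a list of rows."""
    start, end = region
    return image[start:end]

# An H = 1400 image (rows) of width W = 700, overlap width h = 100:
image = [[0] * 700 for _ in range(1400)]
region_a, region_b = slice_regions(1400, 100)
slice_a, slice_b = crop_rows(image, region_a), crop_rows(image, region_b)
print(region_a, region_b)          # (0, 800) (600, 1400)
print(len(slice_a), len(slice_b))  # 800 800 -- rows 600..799 appear in both
```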
Step S130: and segmenting the image to be processed according to each segmented image pixel area to obtain a segmented image.
The embodiment of the step S130 includes: determining each segmentation image pixel area, namely determining the pixel position and the size of the segmentation image in the image to be processed, and segmenting the image to be processed according to each segmentation image pixel area to obtain a segmentation image of the image to be processed.
In the implementation process, each split image pixel area of the image to be processed is generated according to the original pixel value, the cutting line pixel value and the width of the overlapping area, a certain width of the overlapping area is formed between the split image pixel areas on the two sides of the cutting line, and the image to be processed is split according to the split image pixel areas, so that the target in the image to be processed can be prevented from being split, and the image processing accuracy rate is improved.
Optionally, in this embodiment of the present application, obtaining an original pixel region, a cut line pixel value, and an overlap region width of an image to be processed includes: generating cutting line pixel values according to the original pixel area and the subsequent processing pixel area; the subsequent processing pixel area represents a pixel area which needs to be met when the image to be processed is subjected to subsequent processing; inputting an image to be processed into a preset key point detection model to obtain edge key points of a target to be detected in the image to be processed; and obtaining the width of the overlapping area according to the pixel value of the cutting line and the edge key point of the target to be detected.
The implementation of the above steps is as follows: the cutting line pixel value is generated according to the original pixel region and the subsequent-processing pixel region, where the subsequent-processing pixel region represents the pixel region that must be satisfied when the image to be processed undergoes subsequent processing. The subsequent processing may be the model into which the segmented images are to be input, in which case the required pixel region is the model's fixed input size; it may also be any other processing that places requirements on the size or pixel region of the segmented images. As a concrete example: the image to be processed is 700 pixel values wide and 1400 pixel values high, and the input of the subsequent processing model is 800 × 800, i.e., 800 pixel values in width and 800 in height. The cutting line may then be placed at a height of 700 pixel values, so that the original pixel regions of the two segmented images cut along that line are 700 × 700, conforming to the model's fixed input size. It should be noted that when determining the cutting line pixel value, room for the overlap region width needs to be reserved for the segmented images, so that after the original pixel region of a segmented image is enlarged by the overlap width it still meets the pixel region requirement of the subsequent processing.
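The sizing logic of this example, including the reserved overlap room, can be sketched as follows (the single horizontal cut and all names are illustrative assumptions):

```python
def cut_line_pixel(original_h, n_slices=2):
    """Cutting line pixel value that divides the height into equal parts."""
    return original_h // n_slices

def fits_subsequent_processing(slice_h, overlap, model_h):
    """After enlarging a slice by the reserved overlap width, it must
    still satisfy the pixel region required by subsequent processing."""
    return slice_h + overlap <= model_h

# Figures from the text: a 700 x 1400 image and an 800 x 800 model input.
cut = cut_line_pixel(1400)                             # cutting line at 700
print(cut, fits_subsequent_processing(cut, 100, 800))  # 700 True
print(fits_subsequent_processing(cut, 150, 800))       # False: overlap too wide
```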
The image to be processed is input into a preset key point detection model to obtain the edge key points of the targets to be detected in it; if the image contains several targets to be detected, the edge key points of each target are obtained. The edge key points of a target may be its contour points, and the overlap region width is obtained from the positional relationship between these edge key points and the cutting line pixel value. For example, if a plurality of edge key points are detected, the edge key point closest to the cutting line pixel value is selected, and the vertical distance from that key point to the cutting line is taken as the overlap region width. It will be appreciated that the second-closest edge key point to the cutting line may also be used as the edge key point for determining the overlap region width.
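The distance computation can be sketched as follows (the key point detection model itself is assumed; the coordinates in `keypoints_y` stand in for hypothetical detector outputs along the slicing direction):

```python
def overlap_width(cut_line_y, keypoints_y):
    """Overlap region width: the vertical distance from the cutting line
    pixel value to the nearest detected edge key point, so that a target
    straddling the cutting line lands whole inside one slice."""
    nearest = min(keypoints_y, key=lambda y: abs(y - cut_line_y))
    return abs(nearest - cut_line_y)

# Hypothetical edge key points around a cutting line at y = 700:
print(overlap_width(700, [520, 640, 745, 910]))  # 45 -- 745 is the nearest
```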
In the implementation process, a cut line pixel value is generated according to the original pixel area and the subsequent processing pixel area, that is, the cut line pixel value is generated according to the size relationship of the pixel areas of the original pixel area and the subsequent processing pixel area, and the subsequent processing pixel area is a pixel area which needs to be met when the image to be processed is subjected to subsequent processing. And after the cutting line pixel value is determined, obtaining the width of the overlapping area according to the cutting line pixel value and the edge key point of the target to be detected. The segmentation images on both sides of the pixel value along the cutting line are made to have an overlapping region width to avoid that the object located at the cutting line is segmented.
Optionally, in this embodiment of the present application, after segmenting the image to be processed according to each segmented image pixel region to obtain a segmented image, the method further includes: obtaining coordinate information of the segmented image; converting the coordinate information of the segmented image to obtain reduced coordinate information of the segmented image; the reduced coordinate information represents coordinate information in an image to be processed obtained by splicing a plurality of segmented images.
The implementation manner of the above steps is as follows: the method comprises the steps of obtaining coordinate information of a segmented image, wherein the coordinate information can be the center coordinate of the segmented image, the center coordinate can be the center coordinate of a detection frame of the segmented image obtained after the segmented image is input into a defect detection model for defect detection, the center coordinate of the segmented image is obtained, the segmented images on the two sides of a cutting line are provided with overlapping areas, the center coordinate of the segmented image needs to be converted into reduction coordinate information if the center coordinate needs to be represented in an image to be processed, and the reduction coordinate information represents the coordinate information in the image to be processed obtained by splicing a plurality of the segmented images. The image to be processed is an image which is not subjected to segmentation, and the coordinate information in the image to be processed can be the central coordinate of the detection frame of the image which is not subjected to segmentation. The method for splicing the segmented images to obtain the images to be processed can be splicing or overlapping splicing of the segmented images according to the pixel areas of the segmented images to obtain the spliced complete images to be processed.
In the implementation process, after the image to be processed is sliced, the coordinate information of each sliced image is obtained, and by overlap-stitching the sliced images correspondingly, the coordinate information of each sliced image is converted into restored coordinate information in the image to be processed. The coordinate conversion makes the coordinates of each sliced image correspond to the restored coordinates in the image to be processed, completing the processing of the image to be processed.
Optionally, in this embodiment of the present application, converting the coordinate information of the sliced image to obtain the restored coordinate information of the sliced image includes: sequentially inputting the plurality of sliced images into a preset detection model according to the index numbers of the sliced images to obtain a detection result; the detection result includes the coordinate information of each sliced image, and the index number is obtained according to the positional relationship of the sliced image in the image to be processed; converting the coordinate information of the sliced image through a coordinate conversion formula to obtain the restored coordinate information of the sliced image. The coordinate conversion formula includes: Y = y − h_i + (H/N) × i; where Y is the coordinate value in the slicing direction in the restored coordinate information, y is the coordinate value in the slicing direction in the sliced image's coordinate information, h_i is the overlapping region width corresponding to the sliced image with index number i, H is the pixel value of the image to be processed in the slicing direction, and N is the number of slices of the image to be processed; i is the index number of the sliced image, with i ∈ [0, N−1].
The implementation manner of the above steps is as follows: the sliced images are input into the preset detection model in a certain order; they may be input sequentially according to their index numbers, where the index number is obtained from the positional relationship of the sliced image in the image to be processed. For example, a group of sliced images is called a batch, and the number of sliced images in the group is called the batch size. The model takes one batch of sliced images at a time and outputs detection results in the order of the sliced images within the batch. Suppose the size of the image to be processed is W × H, where W is the pixel value in the width direction and H is the pixel value in the height direction. The image to be processed is sliced into N sliced images, and the batch size of the model is set to N, i.e., the model takes N sliced images for processing each time. The N sliced images each have an index number and are arranged according to these index numbers.
After the sliced images are input into the preset detection model, a detection result is obtained. The detection result includes the coordinate information of each sliced image, where the coordinate information may be the center coordinate of the detection frame obtained after the sliced image is input into a defect detection model for defect detection. Specifically, for example, the detection result may be a one-dimensional tensor of five elements, [x, y, w, h, t], where (x, y) is the center coordinate of the detection frame in the sliced image, (w, h) are the width and height of the detection frame, and t is the confidence information.
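The five-element detection tensor described above can be unpacked as follows. This is a minimal sketch assuming a plain Python sequence for the tensor; the field names of the returned dictionary are illustrative, not from the patent.

```python
def parse_detection(result):
    """Unpack a one-dimensional detection tensor [x, y, w, h, t]:
    (x, y) is the center coordinate of the detection frame in the
    sliced image, (w, h) its width and height, t the confidence."""
    x, y, w, h, t = result
    return {"center": (x, y), "size": (w, h), "confidence": t}
```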
The coordinate information of the sliced image is converted through a coordinate conversion formula to obtain the restored coordinate information of the sliced image. The coordinate conversion formula includes: Y = y − h_i + (H/N) × i; where Y is the coordinate value in the slicing direction in the restored coordinate information, y is the coordinate value in the slicing direction in the sliced image's coordinate information, h_i is the overlapping region width corresponding to the sliced image with index number i, H is the pixel value of the image to be processed in the slicing direction, and N is the number of slices of the image to be processed; i is the index number of the sliced image, with i ∈ [0, N−1].
Here, h_i is the overlapping region width corresponding to the sliced image with index number i; specifically, the overlapping region width corresponding to the sliced image with index number 1 is h_1, and the overlapping region width corresponding to the sliced image with index number 2 is h_2.
It should be noted that in the embodiment of the present application, slicing is performed in the H direction and not in the other direction W, so the coordinate value x in the sliced image's coordinate information is equal to the coordinate value X in the restored coordinate information, that is, X = x; where X is the coordinate value in the unsliced direction in the restored coordinate information, and x is the coordinate value in the unsliced direction in the sliced image's coordinate information.
In the implementation process, the coordinate information of the sliced images is converted through the coordinate conversion formula to obtain the restored coordinate information, and every sliced image is converted in a uniform manner, which facilitates subsequent processing.
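The coordinate conversion can be sketched directly from the formula Y = y − h_i + (H/N) × i. This sketch assumes H divides evenly by N (equal slicing); the function names are illustrative.

```python
def restore_y(y, i, h_i, H, N):
    """Map a detection's slicing-direction coordinate y from the local
    coordinates of the sliced image with index i back to the original
    image: Y = y - h_i + (H / N) * i."""
    return y - h_i + (H // N) * i

def restore_x(x):
    """The unsliced direction is unchanged: X = x."""
    return x
```

For example, with H = 1000, N = 2 and an overlap width of 50 for slice index 1, a detection at local y = 100 in slice 1 restores to Y = 100 − 50 + 500 = 550 in the stitched image.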
Optionally, in this embodiment of the present application, the original pixel region includes the pixel value of the image to be processed in the slicing direction; the sliced images include a first sliced image and a second sliced image; generating each sliced-image pixel region of the image to be processed according to the original pixel value, the cutting line pixel value, and the overlapping region width includes: generating each sliced-image pixel region of the image to be processed through a division formula according to the cutting line pixel value and the overlapping region width. The division formula includes: the pixel region of the first sliced image is [0, (H/2) + h_i]; the pixel region of the second sliced image is [(H/2) − h_i, H]; where H is the pixel value of the image to be processed in the slicing direction, H/2 is the cutting line pixel value, and h_i is the overlapping region width corresponding to the sliced image.
The implementation manner of the above steps is as follows: the original pixel region includes the pixel value of the image to be processed in the slicing direction; if the image to be processed is sliced in the H direction, the pixel value in the slicing direction is the pixel value in the H direction. If the image to be processed includes one cutting line, it is sliced into two sliced images, namely a first sliced image and a second sliced image.
Each sliced-image pixel region of the image to be processed is generated through the division formula according to the cutting line pixel value and the overlapping region width. The division formula includes: the pixel region of the first sliced image is [0, (H/2) + h_i], where H is the pixel value of the image to be processed in the slicing direction, H/2 is the cutting line pixel value, and h_i is the overlapping region width corresponding to the sliced image. In one embodiment, the first sliced image is located below the image to be processed; its original pixel region is [0, (H/2)], and after adding the overlapping region width h_i corresponding to the first sliced image, the pixel region of the first sliced image is [0, (H/2) + h_i].
The pixel region of the second sliced image is [(H/2) − h_i, H], where H is the pixel value of the image to be processed in the slicing direction, H/2 is the cutting line pixel value, and h_i is the overlapping region width corresponding to the sliced image. The original pixel region of the second sliced image is [(H/2), H], and because the second sliced image is located above the image to be processed, after adding the overlapping region width h_i the pixel region of the second sliced image is [(H/2) − h_i, H].
In the implementation process, each sliced-image pixel region of the image to be processed is generated through the division formula, and each sliced-image pixel region corresponds to one sliced image. Dividing the pixel regions of the sliced images in this way ensures that slicing the image to be processed along the cutting line does not cut the target apart.
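The two-slice division formula can be sketched as follows; `h1` and `h2` stand for the (possibly different) overlapping region widths of the two sliced images, which the patent collectively denotes h_i.

```python
def slice_regions(H, h1, h2):
    """Pixel regions for splitting at the cutting line H/2 with overlap:
    first sliced image [0, H/2 + h1], second sliced image [H/2 - h2, H]."""
    mid = H // 2  # cutting line pixel value
    return (0, mid + h1), (mid - h2, H)
```

With H = 1000, h1 = 30 and h2 = 40, the two regions are [0, 530] and [460, 1000], so the strip [460, 530] around the cutting line appears in both sliced images.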
Please refer to fig. 2, which illustrates a schematic diagram of the width of the overlapping region according to the embodiment of the present application.
Optionally, in an embodiment of the present application, the method further includes: respectively obtaining edge key points of the target to be detected corresponding to two sides of the pixel value of the cutting line; and acquiring the width of an overlapping area corresponding to each divisional image at two sides of the cutting line pixel value according to the cutting line pixel value and the edge key points of the target to be detected corresponding to the two sides of the cutting line pixel value.
The implementation manner of the above steps is as follows: respectively obtaining edge key points of the target to be detected corresponding to two sides of the pixel value of the cutting line, and if the image to be processed has one cutting line, obtaining the edge key points of the target to be detected corresponding to two sides of the pixel value of the cutting line; if the image to be processed has a plurality of cutting lines, obtaining edge key points of the target to be detected corresponding to two sides of the pixel value of each cutting line.
According to the cutting line pixel value and the edge key points of the target to be detected on the two sides of the cutting line pixel value: as shown in fig. 2, the edge key points above the cutting line pixel value, located at H/2 of the image to be processed, are obtained; they are a first key point and a third key point, where the first key point is the key point closest to the cutting line pixel value, and the vertical distance from the first key point to the cutting line pixel value is the first overlapping region width. The vertical distance from the third key point to the cutting line pixel value may also be used as the first overlapping region width.
Edge key points below the cutting line pixel value are also obtained, namely a second key point and a fourth key point. The second key point is the key point closest to the cutting line pixel value, and the vertical distance from the second key point to the cutting line pixel value is the second overlapping region width, namely the overlapping region width of the sliced image below the cutting line pixel value. In this way, the overlapping region width corresponding to each sliced image on the two sides of the cutting line pixel value is obtained.
The direction of the cutting line and the position of the pixel value may be set according to an actual situation, which is not limited in the embodiment of the present application, for example, if the direction of the cutting line is horizontal, the widths of the overlapping regions corresponding to the split images on the upper and lower sides of the cutting line are respectively obtained; if the direction of the cutting line is vertical, the widths of the overlapping areas corresponding to the cutting images on the left side and the right side of the cutting line are respectively obtained.
In the implementation process, the width of the overlapping region corresponding to each of the segmentation images on the two sides of the cutting line pixel value can be respectively determined according to the cutting line pixel value and the edge key points of the target to be detected corresponding to the two sides of the cutting line pixel value, and the width of the overlapping region of each segmentation image can be flexibly determined, so that each segmentation image has accurate width of the overlapping region.
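The per-side determination described above can be derived from the edge key points as sketched below. This assumes key points are given as coordinates along the slicing direction and, as the embodiment describes, takes the vertical distance from the key point nearest to the cutting line on each side; the function name and the zero fallback for an empty side are assumptions.

```python
def overlap_widths(cut_y, keypoints_y):
    """For each side of the cutting line pixel value cut_y, find the
    edge key point closest to the line; its vertical distance to the
    line is that side's overlapping region width."""
    above = [y for y in keypoints_y if y < cut_y]
    below = [y for y in keypoints_y if y > cut_y]
    width_above = cut_y - max(above) if above else 0
    width_below = min(below) - cut_y if below else 0
    return width_above, width_below
```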
Optionally, in an embodiment of the present application, wherein the image to be processed includes a gear tooth surface image; the overlap region width comprises a first overlap region width and a second overlap region width; the object to be detected comprises a gear; the edge key points comprise the top point of each tooth and the bottom point of each tooth in the tooth surface of the gear; obtaining the width of the overlapping area according to the pixel value of the cutting line and the edge key point of the target to be detected, and the method comprises the following steps: respectively obtaining a first key point of the gear closest to the pixel value of the cutting line and a second key point of the gear closest to the pixel value of the cutting line according to the pixel value of the cutting line, the top point of each tooth in the tooth surface of the gear and the bottom point of each tooth; the first key point and the second key point of the gear are respectively positioned at two sides of the pixel value of the cutting line; obtaining the width of a first overlapping area according to the distance from the first key point of the gear to the pixel value of the cutting line; and obtaining the width of the second overlapping area according to the distance from the second key point of the gear to the pixel value of the cutting line.
Please refer to fig. 3, which illustrates a width schematic diagram of an overlapping region of gear tooth surface images provided by the embodiment of the present application.
The implementation manner of the above steps is as follows: the image to be processed comprises a gear tooth surface image; the overlapping area width comprises a first overlapping area width and a second overlapping area width, wherein the first overlapping area width and the second overlapping area width are respectively the overlapping area widths corresponding to the cut images on two sides of the pixel value of the cutting line.
As shown in fig. 3, the object to be detected includes a gear, specifically, each tooth in the gear; the edge key points comprise a top point of each tooth and a bottom point of each tooth in the tooth surface of the gear; each tooth in the gear tooth surface image is arranged in a slant line, and the top point and the bottom point of one slant line are respectively the top point and the bottom point of one tooth. Specifically, for example, the vertex of each tooth and the nadir of each tooth in the tooth surface of the gear include: a first tooth bottom point, a second tooth bottom point, a third tooth bottom point, a fourth tooth top point and bottom point, a fifth tooth top point and bottom point, a sixth tooth top point and a seventh tooth top point.
In this embodiment, the first side of the pixel value of the cut line is the lower side of the pixel value of the cut line, and the second side of the pixel value of the cut line is the upper side of the pixel value of the cut line. Obtaining a first key point, namely a fourth tooth bottom point, of the gear closest to the pixel value of the cutting line in the first side; and obtaining a second key point of the gear, namely a fourth addendum point, in the second side, closest to the pixel value of the cutting line; the first side and the second side are two sides of the pixel value of the cutting line respectively, and correspondingly, the first key point of the gear and the second key point of the gear are also located at two sides of the pixel value of the cutting line respectively.
The first overlapping region width is obtained according to the distance from a key point of the gear, namely the third tooth bottom point, to the cutting line pixel value; the second overlapping region width is obtained according to the distance from another key point of the gear, namely the fifth tooth top point, to the cutting line pixel value. Since the first overlapping region width is measured on the first side of the cutting line pixel value, it is the overlapping region width corresponding to the sliced image on the second side of the cutting line pixel value.
In the implementation process, the image to be processed includes a gear tooth surface image; the two sides of one cutting line have a first overlapping region width and a second overlapping region width respectively, and the target to be detected includes a gear. According to the top point and the bottom point of each tooth in the gear tooth surface image, the first key point of the gear closest to the cutting line pixel value and the second key point of the gear closest to the cutting line pixel value are obtained respectively, so that the first overlapping region width and the second overlapping region width in the gear tooth surface image are determined, ensuring that no tooth of the gear is cut apart and that the gear image information passed to the subsequent processing module is complete.
In a preferred embodiment, the image to be processed may be sliced into a plurality of sliced images according to the pixel values of a plurality of cutting lines, and the slicing may be equal or unequal. If the slicing is equal, for example with the slicing direction being the H direction and N sliced images produced in total, then the cutting line pixel values of N − 1 cutting lines are needed: the pixel value of the first cutting line is H/N, the pixel value of the second cutting line is 2H/N, and so on. The overlapping region width corresponding to the sliced images on the two sides of each cutting line is calculated from the edge key points, closest to the cutting line pixel value, on the two sides of that cutting line pixel value. It should be noted that if the image to be processed is sliced into a plurality of sliced images, the sliced images in the middle region have two overlapping region widths in the slicing direction H; when determining the pixel region of such a sliced image, the corresponding overlapping region width needs to be added to the original pixel region on both sides in the slicing direction to form the pixel region of the sliced image.
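The multi-cut-line embodiment with equal slicing can be sketched as follows. For simplicity this sketch assumes a single overlap width per cutting line (the patent allows different widths on each side of a line), so interior sliced images are extended on both sides and the outermost ones only on their inner side.

```python
def n_slice_regions(H, N, overlaps):
    """Equal slicing into N sliced images along the H direction.
    overlaps[k] is the overlap width at cutting line k, for the N-1
    cutting lines at H/N, 2H/N, ..., (N-1)H/N. Interior slices gain an
    overlap on both sides; the first and last only on their inner side."""
    step = H // N
    regions = []
    for i in range(N):
        start = i * step - (overlaps[i - 1] if i > 0 else 0)
        end = (i + 1) * step + (overlaps[i] if i < N - 1 else 0)
        regions.append((max(0, start), min(H, end)))
    return regions
```

For H = 900 and N = 3 with overlap widths 20 and 30 at the two cutting lines, the regions are [0, 320], [280, 630] and [570, 900]; the middle sliced image carries an overlap on both sides, as noted above.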
Please refer to fig. 4, which illustrates a schematic structural diagram of an image processing apparatus according to an embodiment of the present application; an embodiment of the present application provides an image processing apparatus 200, including:
an obtaining module 210, configured to obtain an original pixel region, a cut line pixel value, and an overlapping region width of an image to be processed;
a pixel area generating module 220, configured to generate each segmented image pixel area of the to-be-processed image according to the original pixel value, the cutting line pixel value, and the width of the overlapping area;
and the segmentation module 230 is configured to segment the image to be processed according to each segmented image pixel region, so as to obtain a segmented image.
Optionally, in this embodiment of the present application, the obtaining module 210 of the image processing apparatus 200 is specifically configured to: generate the cutting line pixel value according to the original pixel region and the subsequent-processing pixel region, where the subsequent-processing pixel region represents the pixel region that must be satisfied when the image to be processed undergoes subsequent processing; input the image to be processed into a preset key point detection model to obtain the edge key points of the target to be detected in the image to be processed; and obtain the overlapping region width according to the cutting line pixel value and the edge key points of the target to be detected.
Optionally, in this embodiment of the present application, the image processing apparatus 200 further includes: the restoring module is used for obtaining coordinate information of the segmented image; converting the coordinate information of the segmented image to obtain reduced coordinate information of the segmented image; the reduced coordinate information represents coordinate information in an image to be processed obtained by splicing a plurality of segmented images.
Optionally, in this embodiment of the application, the restoring module of the image processing apparatus 200 is specifically configured to: sequentially input the plurality of sliced images into a preset detection model according to the index numbers of the sliced images to obtain a detection result, where the detection result includes the coordinate information of each sliced image and the index number is obtained according to the positional relationship of the sliced image in the image to be processed; and convert the coordinate information of the sliced image through a coordinate conversion formula to obtain the restored coordinate information of the sliced image. The coordinate conversion formula includes: Y = y − h_i + (H/N) × i; where Y is the coordinate value in the slicing direction in the restored coordinate information, y is the coordinate value in the slicing direction in the sliced image's coordinate information, h_i is the overlapping region width corresponding to the sliced image with index number i, H is the pixel value of the image to be processed in the slicing direction, and N is the number of slices of the image to be processed; i is the index number of the sliced image, with i ∈ [0, N−1].
Optionally, in this embodiment of the present application, in the image processing apparatus 200, the original pixel region includes the pixel value of the image to be processed in the slicing direction; the sliced images include a first sliced image and a second sliced image; the pixel region generating module 220 is specifically configured to generate each sliced-image pixel region of the image to be processed through a division formula according to the cutting line pixel value and the overlapping region width. The division formula includes: the pixel region of the first sliced image is [0, (H/2) + h_i]; the pixel region of the second sliced image is [(H/2) − h_i, H]; where H is the pixel value of the image to be processed in the slicing direction, H/2 is the cutting line pixel value, and h_i is the overlapping region width corresponding to the sliced image.
Optionally, in this embodiment of the present application, the image processing apparatus 200 further includes: the overlapping area acquisition module is used for respectively acquiring edge key points of the target to be detected corresponding to two sides of the pixel value of the cutting line; and acquiring the width of an overlapping area corresponding to each divisional image at two sides of the cutting line pixel value according to the cutting line pixel value and the edge key points of the target to be detected corresponding to the two sides of the cutting line pixel value.
Alternatively, in the embodiment of the present application, the image processing apparatus 200, wherein the image to be processed includes a gear tooth surface image; the overlap region width comprises a first overlap region width and a second overlap region width; the object to be detected comprises a gear; the edge key points comprise a top point of each tooth and a bottom point of each tooth in the tooth surface of the gear; the obtaining module 210 is further configured to obtain, according to the cutting line pixel value, a top point of each tooth in the tooth surface of the gear, and a bottom point of each tooth, a first key point of the gear closest to the cutting line pixel value and a second key point of the gear closest to the cutting line pixel value, respectively; the first key point and the second key point of the gear are respectively positioned at two sides of the pixel value of the cutting line; obtaining the width of a first overlapping area according to the distance from the first key point of the gear to the pixel value of the cutting line; and obtaining the width of the second overlapping area according to the distance from the second key point of the gear to the pixel value of the cutting line.
It should be understood that the apparatus corresponds to the above image processing method embodiment and can perform the steps of that method embodiment; for the specific functions of the apparatus, reference may be made to the above description, and a detailed description is omitted here to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware, or solidified in the operating system (OS) of the apparatus.
Please refer to fig. 5, which illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application. An electronic device 300 provided in an embodiment of the present application includes: a processor 310 and a memory 320, the memory 320 storing machine readable instructions executable by the processor 310, the machine readable instructions when executed by the processor 310 performing the method as above.
The embodiment of the application also provides a storage medium, wherein the storage medium is stored with a computer program, and the computer program is executed by a processor to execute the method.
The storage medium may be implemented by any type of volatile or nonvolatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.

Claims (10)

1. An image processing method, comprising:
obtaining an original pixel area, a cutting line pixel value and an overlapping area width of an image to be processed;
generating each segmentation image pixel area of the image to be processed according to the original pixel value, the cutting line pixel value and the width of the overlapping area; and
and segmenting the image to be processed according to the pixel area of each segmented image to obtain a segmented image.
2. The method of claim 1, wherein obtaining the original pixel area, the cut-line pixel value, and the overlap area width of the image to be processed comprises:
generating the cutting line pixel value according to the original pixel area and the subsequent processing pixel area; the subsequent processing pixel area represents a pixel area which needs to be met when the image to be processed is subjected to subsequent processing;
inputting the image to be processed into a preset key point detection model to obtain edge key points of a target to be detected in the image to be processed;
and obtaining the width of the overlapping area according to the cutting line pixel value and the edge key point of the target to be detected.
3. The method according to claim 1, wherein after the segmenting the image to be processed according to the each segmented image pixel region to obtain the segmented image, the method further comprises:
obtaining coordinate information of the segmented image;
converting the coordinate information of the segmented image to obtain reduced coordinate information of the segmented image; and the reduced coordinate information represents the coordinate information in the image to be processed obtained by splicing the plurality of segmented images.
4. The method according to claim 3, wherein the converting the coordinate information of the sliced image to obtain the restored coordinate information of the sliced image comprises:
sequentially inputting a plurality of the segmentation images into a preset detection model according to the index numbers of the segmentation images to obtain a detection result; the detection result comprises coordinate information of each segmented image, and the index number is obtained according to the position relation of the segmented images in the image to be processed;
converting the coordinate information of the segmented image through a coordinate conversion formula to obtain the reduced coordinate information of the segmented image;
the coordinate conversion formula includes: Y = y − h_i + (H/N) × i;
wherein Y is the coordinate value of the segmentation direction in the reduced coordinate information, y is the coordinate value of the segmentation direction in the coordinate information, h_i is the width of the overlapping area corresponding to the segmented image with the index number i, H is the pixel value of the image to be processed in the segmentation direction, and N is the segmentation quantity of the image to be processed; i is the index number of the segmented image, and i ∈ [0, N−1].
5. The method according to claim 1, wherein the original pixel region comprises the pixel value of the image to be processed in the segmentation direction; the segmented images comprise a first segmented image and a second segmented image; and generating each segmented-image pixel region of the image to be processed according to the original pixel value, the cutting line pixel value, and the overlap region width comprises:
generating each segmented-image pixel region of the image to be processed through a division formula according to the cutting line pixel value and the overlap region width, the division formula comprising:
the pixel region of the first segmented image is [0, (H/2) + h_i];
the pixel region of the second segmented image is [(H/2) − h_i, H];
where H is the pixel value of the image to be processed in the segmentation direction, H/2 is the cutting line pixel value, and h_i is the overlap region width corresponding to the segmented image.
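The two-way division formula of claim 5 can be illustrated as follows. This is a minimal sketch, not the patent's implementation; it generalizes to separate per-side overlap widths (`h1`, `h2`), consistent with claims 6–7, whereas claim 5 writes both as h_i.

```python
def split_regions(H, h1, h2):
    """Return the two overlapping pixel ranges along the segmentation
    direction: first segment [0, H/2 + h1], second segment [H/2 - h2, H].

    H:  pixel value of the image in the segmentation direction
    h1: overlap width extending the first segment past the cut line
    h2: overlap width extending the second segment before the cut line
    """
    cut = H // 2  # the cutting line pixel value
    return (0, cut + h1), (cut - h2, H)

# A 1000-pixel image with overlap widths of 30 and 40 pixels:
first, second = split_regions(1000, 30, 40)
print(first, second)  # → (0, 530) (460, 1000); the regions overlap by 70 px
```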
6. The method of claim 2, further comprising:
respectively obtaining the edge key points of the target to be detected on each of the two sides of the cutting line pixel value; and
obtaining the overlap region width corresponding to each segmented image on the two sides of the cutting line pixel value according to the cutting line pixel value and the edge key points of the target to be detected on the two sides of the cutting line pixel value.
7. The method of claim 2, wherein the image to be processed comprises a gear tooth-surface image; the overlap region width comprises a first overlap region width and a second overlap region width; the target to be detected comprises a gear; and the edge key points comprise the top point and the bottom point of each tooth in the gear tooth surface;
wherein obtaining the overlap region width according to the cutting line pixel value and the edge key points of the target to be detected comprises:
obtaining, according to the cutting line pixel value and the top point and bottom point of each tooth in the gear tooth surface, a first key point of the gear closest to the cutting line pixel value and a second key point of the gear closest to the cutting line pixel value, the first key point and the second key point being located on opposite sides of the cutting line pixel value;
obtaining the first overlap region width according to the distance from the first key point of the gear to the cutting line pixel value; and
obtaining the second overlap region width according to the distance from the second key point of the gear to the cutting line pixel value.
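The key-point-based overlap selection of claims 6–7 can be sketched as below. This is an assumed illustration: the key points are represented simply as coordinates along the segmentation direction, and the function name is hypothetical.

```python
def overlap_widths(cut, keypoints):
    """Given the cutting line pixel value and the edge key points of the
    target (e.g. tooth top/bottom points) along the segmentation
    direction, return (first_overlap_width, second_overlap_width):
    the distances from the nearest key point on each side of the
    cutting line to the cutting line itself."""
    before = [p for p in keypoints if p < cut]  # side of the first segment
    after = [p for p in keypoints if p > cut]   # side of the second segment
    first_kp = max(before)   # key point closest to the cut line from before
    second_kp = min(after)   # key point closest to the cut line from after
    return cut - first_kp, second_kp - cut

# Cutting line at 500; tooth key points at 120, 360, 470, 540, 800:
print(overlap_widths(500, [120, 360, 470, 540, 800]))  # → (30, 40)
```

Splitting at the nearest key points on either side of the cut line ensures each overlapping region contains a whole tooth feature, so no tooth is severed by the cut.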
8. An image processing apparatus, comprising:
an acquisition module, configured to acquire an original pixel region, a cutting line pixel value, and an overlap region width of an image to be processed;
a pixel region generating module, configured to generate each segmented-image pixel region of the image to be processed according to the original pixel value, the cutting line pixel value, and the overlap region width; and
a segmentation module, configured to segment the image to be processed according to each segmented-image pixel region to obtain segmented images.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 7.
CN202211236456.9A 2022-10-10 2022-10-10 Image processing method and device, electronic equipment and storage medium Pending CN115588018A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211236456.9A CN115588018A (en) 2022-10-10 2022-10-10 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211236456.9A CN115588018A (en) 2022-10-10 2022-10-10 Image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115588018A true CN115588018A (en) 2023-01-10

Family

ID=84780058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211236456.9A Pending CN115588018A (en) 2022-10-10 2022-10-10 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115588018A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116660273A (en) * 2023-07-28 2023-08-29 菲特(天津)检测技术有限公司 Chain piece missing detection method in chain and electronic equipment
CN116660273B (en) * 2023-07-28 2023-10-27 菲特(天津)检测技术有限公司 Chain piece missing detection method in chain and electronic equipment

Similar Documents

Publication Publication Date Title
EP1843294B1 (en) Motion vector calculation method, hand-movement correction device using the method, imaging device, and motion picture generation device
EP3309703B1 (en) Method and system for decoding qr code based on weighted average grey method
EP3079100B1 (en) Image processing apparatus, image processing method and computer readable storage medium
CN111340749B (en) Image quality detection method, device, equipment and storage medium
CN111415364B (en) Conversion method, system and storage medium for image segmentation sample in computer vision
CN115588018A (en) Image processing method and device, electronic equipment and storage medium
CN111598088B (en) Target detection method, device, computer equipment and readable storage medium
CN111260675B (en) High-precision extraction method and system for image real boundary
CN111209908B (en) Method, device, storage medium and computer equipment for updating annotation frame
CN116337072A (en) Construction method, construction equipment and readable storage medium for engineering machinery
CN111798422A (en) Checkerboard angular point identification method, device, equipment and storage medium
CN113657370B (en) Character recognition method and related equipment thereof
CN113592720B (en) Image scaling processing method, device, equipment and storage medium
CN112154479A (en) Method for extracting feature points, movable platform and storage medium
KR100691855B1 (en) Apparatus for extracting features from image information
CN115661131B (en) Image identification method and device, electronic equipment and storage medium
CN111340040A (en) Paper character recognition method and device, electronic equipment and storage medium
CN115984796A (en) Image annotation method and system
CN113657369B (en) Character recognition method and related equipment thereof
CN110686687B (en) Method for constructing map by visual robot, robot and chip
CN112614154A (en) Target tracking track obtaining method and device and computer equipment
CN113554024A (en) Method and device for determining cleanliness of vehicle and computer equipment
JP2011018175A (en) Character recognition apparatus and character recognition method
JPH11120351A (en) Image matching device and storage medium to store image matching program
CN115148047B (en) Parking space detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination