CN114494017A - Method, device, equipment and medium for adjusting image DPI (dots per inch) according to a scale - Google Patents

Method, device, equipment and medium for adjusting image DPI (dots per inch) according to a scale

Info

Publication number
CN114494017A
Authority
CN
China
Prior art keywords
scale
image
value
pixel
region
Prior art date
Legal status
Granted
Application number
CN202210089852.7A
Other languages
Chinese (zh)
Other versions
CN114494017B (en)
Inventor
魏凯
王心安
王俊琪
王刚
汤林鹏
邰骋
Current Assignee
Moqi Technology Beijing Co ltd
Beijing Jianmozi Technology Co ltd
Original Assignee
Moqi Technology Beijing Co ltd
Beijing Jianmozi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Moqi Technology Beijing Co ltd, Beijing Jianmozi Technology Co ltd filed Critical Moqi Technology Beijing Co ltd
Priority to CN202210089852.7A
Publication of CN114494017A
Application granted
Publication of CN114494017B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4046 Scaling the whole image or part thereof using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a method, a device, equipment and a medium for adjusting the DPI (dots per inch) of an image according to a scale. After a scale area image in an image to be processed is obtained, a pixel accumulated value is determined for the first coordinate component of each pixel point of the scale area image on a first region boundary, yielding a pixel accumulated value curve; from this curve a first coordinate component list is obtained and the number of pixel points within a unit scale of the scale is determined. An image adjustment coefficient is then determined based on the target number of pixel points per unit length, the number of pixel points within a unit scale of the scale, and a length unit conversion coefficient; finally, the size of the image to be processed is adjusted based on the image adjustment coefficient, so that the number of pixel points per unit length in the image to be processed becomes the target number. The method improves both the efficiency and the accuracy of image DPI adjustment.

Description

Method, device, equipment and medium for adjusting image DPI (dots per inch) according to a scale
Technical Field
The application relates to the technical field of image processing, and in particular to a method, a device, equipment and a medium for adjusting the DPI (dots per inch) of an image according to a scale.
Background
In fingerprint and palm print recognition systems, the base library images are usually pressed or inked prints captured at 500 dpi, that is, each inch of the image contains 500 pixel points. For images obtained in other ways, such as live or contactlessly photographed finger and palm print images, it is generally difficult to guarantee a resolution of 500 dpi. A method is therefore needed to adjust the dpi of a photographed image to 500 dpi so that it can be conveniently compared with the base library images.
The existing method relies on placing a ruler in the shot: the ruler and the finger or palm print to be photographed are placed on the same plane, the ruler's graduations are read manually, the number of pixel points per unit scale is determined, and the image dpi and an appropriate scaling ratio are calculated from that number. Manual reading, however, is time-consuming and labour-intensive, and images cannot be processed automatically in batches.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a device and a medium for adjusting the DPI (dots per inch) of an image according to a scale, so as to solve the above problems in the prior art: by automatically identifying the number of pixels contained in each inch of a captured image, the captured image is restored to an input image meeting the standard, improving both the accuracy of scale reading and the efficiency of image processing.
In a first aspect, a method for adjusting image DPI according to a scale is provided, and the method may include:
acquiring a scale area image in an image to be processed; the scale area of the scale is rectangular;
determining a pixel accumulated value corresponding to a first coordinate component of each pixel point of the scale area image on a first area boundary to obtain a pixel accumulated value curve; for any target coordinate component among the first coordinate components, the corresponding pixel accumulated value is the sum of the pixel values of all pixel points in the scale area image that have the target coordinate component as their first coordinate component; each point on the pixel accumulated value curve takes the first coordinate component of a pixel point on the first region boundary as its abscissa and the corresponding pixel accumulated value as its ordinate; the first area boundary is the boundary of the scale area image that is perpendicular to the scale marks;
acquiring a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on the pixel accumulated value curve; or acquiring a first coordinate component list corresponding to an accumulated value valley value meeting a preset valley value condition on the pixel accumulated value curve;
determining the number of pixel points within a unit scale of the scale according to the differences between adjacent first coordinate components in the first coordinate component list;
determining an image adjustment coefficient based on the target number of pixel points per unit length, the number of pixel points within a unit scale of the scale, and a length unit conversion coefficient; the length unit conversion coefficient is the ratio of the length unit (an inch) to the physical length of a unit scale of the scale;
and adjusting the size of the image to be processed based on the image adjustment coefficient, so that the number of pixel points per unit length in the image to be processed becomes the target number of pixel points.
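Under the assumption of a scale whose tick marks are the brightest columns of the scale area image, the claimed steps can be sketched end to end. All names below are hypothetical, and a naive brightness threshold stands in for the "preset peak condition":

```python
def adjust_dpi_plan(scale_region, image_size, scales_per_inch, target_ppi=500):
    """Sketch of steps S102-S106 for a scale with bright tick marks.

    scale_region    -- 2-D list of pixel values, rows = y, columns = x
    image_size      -- (width, height) of the full image to be processed
    scales_per_inch -- the length unit conversion coefficient
    """
    # S102: pixel accumulated value for every first coordinate component
    curve = [sum(col) for col in zip(*scale_region)]
    # S103: naive "preset peak condition" -- columns near the maximum
    xs = [x for x, v in enumerate(curve) if v >= 0.8 * max(curve)]
    # S104: pixels per unit scale = mean difference of adjacent components
    diffs = [b - a for a, b in zip(xs, xs[1:])]
    px_per_scale = sum(diffs) / len(diffs)
    # S105: adjustment coefficient = target ppi / current ppi
    k = target_ppi / (px_per_scale * scales_per_inch)
    # S106: both sides are scaled by the same coefficient
    w, h = image_size
    return k, (round(w * k), round(h * k))
```

For example, with tick columns 3 pixels apart on a millimetre scale (conversion coefficient 25.4), the sketch yields a coefficient of 500 / (3 x 25.4).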
In a second aspect, an apparatus for adjusting image DPI according to a scale is provided, and the apparatus may include:
the acquisition unit is used for acquiring a scale area image in an image to be processed; the scale area of the scale is rectangular;
the determining unit is used for determining a pixel accumulated value corresponding to a first coordinate component of each pixel point of the scale area image on a first area boundary to obtain a pixel accumulated value curve; for any target coordinate component among the first coordinate components, the corresponding pixel accumulated value is the sum of the pixel values of all pixel points in the scale area image that have the target coordinate component as their first coordinate component; each point on the pixel accumulated value curve takes the first coordinate component of a pixel point on the first region boundary as its abscissa and the corresponding pixel accumulated value as its ordinate; the first area boundary is the boundary of the scale area image that is perpendicular to the scale marks;
the acquisition unit is also used for acquiring a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on the pixel accumulated value curve; or acquiring a first coordinate component list corresponding to an accumulated value valley value meeting a preset valley value condition on the pixel accumulated value curve;
the determining unit is further configured to determine the number of pixel points within a unit scale of the scale according to the differences between adjacent first coordinate components in the first coordinate component list; and to determine an image adjustment coefficient based on the target number of pixel points per unit length, the number of pixel points within a unit scale of the scale, and a length unit conversion coefficient; the length unit conversion coefficient is the ratio of the length unit (an inch) to the physical length of a unit scale of the scale;
and the adjusting unit is used for adjusting the size of the image to be processed based on the image adjustment coefficient, so that the number of pixel points per inch in the image to be processed becomes the target number of pixel points.
In a third aspect, there is provided an apparatus for DPI adjustment according to a scale, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps of any one of the above first aspects.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, performs the method steps of any of the first aspects described above.
In a fifth aspect, a computer program product is provided which, when run on a device for adjusting image DPI according to a scale, causes the device to perform the method steps of any of the first aspects described above.
According to the method for adjusting image DPI according to a scale provided by the embodiments of the present application, the number of pixel points within a unit scale is determined by identifying the positions of the scale's graduations in the image, so that the number of pixel points per unit length of the image to be processed is automatically adjusted to the target number, greatly improving the efficiency and accuracy of image DPI adjustment.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be considered limiting of its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a method for adjusting a DPI according to a scale according to an embodiment of the present application;
FIG. 2A is a schematic diagram of an image of a scale region of a black-on-white scale provided in an embodiment of the present application;
FIG. 2B is a schematic view of another scale region image of a black-on-white scale provided in an embodiment of the present application;
fig. 2C is a schematic diagram of a pixel accumulation value curve according to an embodiment of the present disclosure;
fig. 2D is a schematic diagram of a target area image according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an apparatus for DPI adjustment of an image according to a scale according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for adjusting DPI according to a scale according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without any creative effort belong to the protection scope of the present application.
For convenience of understanding, terms referred to in the embodiments of the present application are explained below:
the image inversion process is to invert R, G, B values in the image. For example, the R, G, B channel at a point has a pixel value of (0,0,0) and a color inversion of (255 ).
Image mask processing takes a pre-made region-of-interest mask (a binary image whose pixel values are 1 inside the region of interest and 0 outside it) and multiplies it element-wise with the image to be processed, yielding a region-of-interest image: pixel values inside the region of interest are kept unchanged, while all pixel values outside it become 0.
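The element-wise product described above, sketched on plain nested lists (a real pipeline would use an array library):

```python
def apply_mask(image, mask):
    """Element-wise product with a binary region-of-interest mask:
    pixels where mask == 1 keep their value, all others become 0."""
    return [[p * m for p, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]
```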
Image thinning processing is the operation generally referred to as image skeletonization: the process of reducing the lines of an image from a width of several pixels to a width of one pixel.
The method for adjusting the DPI of an image according to a scale provided by the embodiments of the present application can be applied to a server or to a terminal. To ensure the accuracy of the adjustment, the terminal may be a device with sufficient computing power, such as a smartphone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a handheld device, a vehicle-mounted device, a wearable device, a computing device or other processing device connected to a wireless modem, a Mobile Station (MS), or other User Equipment (UE) or mobile terminal. The server may be an application server or a cloud server.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a schematic flowchart of a method for adjusting a DPI according to a scale according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
and S101, acquiring a scale area image in the image to be processed.
The image to be processed contains a scale and a photographed object. The photographed object may be a fingerprint, a palm print, a footprint, etc. To facilitate comparison between the photographed object in the image to be processed and the objects in the base library images, the number of pixel points per unit length of the image to be processed needs to be adjusted to the target number of pixel points, i.e. the DPI of the image to be processed must be adjusted. To know the adjustment ratio, a scale must be included as a reference object in the image together with the photographed object. The scale carries scale marks, and the scale area image is the region cut out of the image to be processed that contains the scale's graduations.
The scale area is rectangular, and the four boundaries of the scale area image are respectively parallel and perpendicular to the scale marks. Unlike a full ruler region, the scale area may contain only the scale marks and the scale background (i.e. the part of the ruler surface that carries neither marks nor text), and need not contain the numbers or characters attached to the marks. The two boundaries perpendicular to the scale marks may be connected by the marks (Fig. 2A), i.e. one end point of a mark lies on one boundary perpendicular to it and the other end point lies on the other such boundary. Alternatively, the two boundaries perpendicular to the marks may not be connected by them: one end point of a mark may lie on one perpendicular boundary while the other end point falls short of the opposite boundary, or neither end point may lie on a perpendicular boundary, both lying inside the image (Fig. 2B). As explained below, the arrangement in which the two perpendicular boundaries are connected by the scale marks is preferred.
In specific implementations, the scale area image in the image to be processed can be output directly by a neural network model, or obtained through further image processing of the model's output.
Step S102, determining a pixel accumulated value corresponding to the first coordinate component of each pixel point of the scale area image on the first area boundary to obtain a pixel accumulated value curve.
The scale area image has first area boundaries and second area boundaries. A first area boundary is a boundary of the scale area image perpendicular to the scale marks; a second area boundary is a boundary parallel to the scale marks. The pixel coordinates of the points in the scale area image consist of a first coordinate component (e.g. the x coordinate) and a second coordinate component (e.g. the y coordinate), obtained by taking a first area boundary and a second area boundary as the coordinate axes. The first and second area boundaries of the scale area image of a black-on-white scale are shown in Fig. 2B.
Since the scale marks are perpendicular to the first area boundary, the first coordinate components that correspond to scale marks can be distinguished from those that do not according to the pixel accumulated values of the first coordinate components.
Common scales come in two varieties: dark marks on a light background and light marks on a dark background. If the scale marks are brighter than the background, the pixel accumulated values of first coordinate components that fall on a mark are larger; since the marks are narrow relative to the background, those components correspond to peaks on the pixel accumulated value curve. If the marks are darker than the background, the accumulated values of components not on a mark are larger, and the components on a mark correspond to valleys on the curve. It can be understood that when the two boundaries perpendicular to the scale marks are connected by the marks, the difference between the accumulated values of components on a mark and those not on a mark is larger and easier to distinguish, which is why that arrangement is preferred.
For a pixel point on the first region boundary whose first coordinate component is the target coordinate component, the pixel accumulated value corresponding to the target coordinate component is the sum of the pixel values of all pixel points in the scale area image that have the target coordinate component as their first coordinate component. For example, taking the first area boundary as the x-axis and the second area boundary as the y-axis, the coordinates of the pixel points on the first area boundary are (x1, 0), (x2, 0), …, (xn, 0), where x1, x2, …, xn are the first coordinate components. The pixel accumulated value corresponding to the first coordinate component x1 is the sum of the pixel values of the pixel points in the scale area image whose first coordinate component is x1; that is, if those pixel points are (x1, y1), (x1, y2), …, (x1, ym), the accumulated value is the sum of the pixel values at (x1, y1), (x1, y2), …, (x1, ym).
Then, from the first coordinate components and their corresponding accumulated values, a pixel accumulated value curve is constructed, taking the first coordinate component of each pixel point on the first region boundary as the abscissa and the corresponding pixel accumulated value as the ordinate. In the pixel accumulated value curve shown in Fig. 2C, the abscissa is the first coordinate component on the first region boundary of the scale area image and the ordinate is the corresponding pixel accumulated value.
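With the scale area image held as a row-major grid (rows = second coordinate y, columns = first coordinate x), the accumulation described above is a single per-column sum; the result gives the curve's ordinates in order of the first coordinate component:

```python
def pixel_accumulation_curve(scale_region):
    """For every first coordinate component x, sum the pixel values of all
    points (x, y1) ... (x, ym) in the scale area image."""
    return [sum(column) for column in zip(*scale_region)]
```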
Step S103, a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on a pixel accumulated value curve is obtained, and/or a first coordinate component list corresponding to an accumulated value valley value meeting a preset valley value condition on the pixel accumulated value curve is obtained.
As described above, the first coordinate components corresponding to the scale marks appear as peaks or valleys of the accumulated value curve, and the number of pixels between adjacent graduations can be calculated from the distance between adjacent peaks or valleys.
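A simple detector for the accumulated-value peaks is sketched below. The `min_height` and `min_distance` arguments are hypothetical names standing in for preset peak conditions of the kind the text describes (a library routine such as SciPy's peak finder offers equivalent filters); valleys can be found by applying the same routine to the negated curve.

```python
def find_peaks(curve, min_height=0, min_distance=1):
    """Local maxima of the pixel accumulated value curve that satisfy a
    minimum-height condition and a minimum spacing to other kept peaks."""
    candidates = [i for i in range(1, len(curve) - 1)
                  if curve[i - 1] < curve[i] >= curve[i + 1]
                  and curve[i] >= min_height]
    kept = []
    # keep the strongest peaks first, drop any candidate too close to one
    for i in sorted(candidates, key=lambda i: -curve[i]):
        if all(abs(i - j) >= min_distance for j in kept):
            kept.append(i)
    return sorted(kept)  # the first coordinate component list
```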
In order to improve the accuracy of peak/valley identification, and thus the accuracy of the pixel count between adjacent graduations, the accumulated value peaks (or valleys) on the curve can be filtered using preset peak conditions (or preset valley conditions), so as to discard false peaks (or valleys) produced by poor image quality or peak-finding errors. A preset peak (valley) condition may be a requirement on the size of the peak (valley), or on the distance between a peak and an adjacent peak or valley. The first coordinate components corresponding to the filtered peaks (valleys) then form the first coordinate component list. Of course, the lists corresponding to the peaks and to the valleys can both be computed, and the more accurate of the two chosen for determining the number of pixel points within a unit scale.

Step S104, determining the number of pixel points within a unit scale of the scale according to the differences between adjacent first coordinate components in the first coordinate component list.
In a specific implementation, the mean of the differences between adjacent first coordinate components in the first coordinate component list is computed and taken as the number of pixel points within a unit scale of the scale.
When all scale marks are detected correctly, the difference between adjacent first coordinate components is the number of pixels between adjacent marks. Adjacent first coordinate components are subtracted and the mean of the differences is used as the spacing between adjacent scale marks. For example, if the first coordinate components in the list are 1, 3.9, 7 and 10.1, the differences between adjacent components are 2.9, 3.1 and 3.1, and their mean, approximately 3.03, is taken as the number of pixels between adjacent scale marks.
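The worked example, as code (note that the mean of 2.9, 3.1 and 3.1 is about 3.03):

```python
components = [1.0, 3.9, 7.0, 10.1]  # first coordinate component list
diffs = [b - a for a, b in zip(components, components[1:])]  # [2.9, 3.1, 3.1]
# their mean is taken as the number of pixel points within a unit scale
pixels_per_unit_scale = sum(diffs) / len(diffs)
```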
Step S105, determining an image adjustment coefficient based on the target number of pixel points per unit length, the number of pixel points within a unit scale of the scale, and the length unit conversion coefficient. The length unit conversion coefficient is the ratio of the unit length (e.g. an inch) to the physical length of a unit scale of the scale. In a specific implementation, the current number of pixels per unit length is obtained as the product of the length unit conversion coefficient and the number of pixels within a unit scale.
The image adjustment factor is determined based on the ratio of the number of target pixels in a unit length (e.g., inches) to the number of current pixels in the unit length.
For example, if the number of target pixels in the unit length is 500 and the number of current pixels in the unit length is 1000, the image adjustment coefficient is 0.5.
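The two computations above can be combined into a small helper (names hypothetical); for a millimetre scale the length unit conversion coefficient would be 25.4 unit scales per inch:

```python
def image_adjustment_coefficient(target_px_per_inch,
                                 px_per_unit_scale,
                                 unit_scales_per_inch):
    """Current ppi = pixels per unit scale x unit scales per inch;
    the image adjustment coefficient is target ppi / current ppi."""
    current_px_per_inch = px_per_unit_scale * unit_scales_per_inch
    return target_px_per_inch / current_px_per_inch
```

With a target of 500 pixels per inch and a current count of 1000, the coefficient is 0.5, matching the example in the text.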
Step S106, adjusting the size of the image to be processed based on the image adjustment coefficient.
Based on the image adjustment coefficient, the image to be processed is scaled by the same ratio along both the first area boundary and second area boundary directions, so that the number of pixel points per unit length in the image to be processed becomes the target number. For example, if the image adjustment coefficient is 0.5, the width and the height of the image to be processed are both scaled by a factor of 0.5.
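A nearest-neighbour sketch of the uniform rescaling; a production implementation would use an interpolating resize, e.g. the bilinear scaling named in classification G06T3/4007:

```python
def rescale(image, k):
    """Scale width and height by the same coefficient k, so the
    pixels-per-inch count is multiplied by k as well (nearest neighbour)."""
    h, w = len(image), len(image[0])
    new_h, new_w = max(1, round(h * k)), max(1, round(w * k))
    return [[image[min(int(y / k), h - 1)][min(int(x / k), w - 1)]
             for x in range(new_w)]
            for y in range(new_h)]
```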
According to the method for adjusting image DPI according to a scale provided by the embodiments of the present application, the number of pixel points within a unit scale is determined by identifying the positions of the scale's graduations in the image, so that the number of pixel points per unit length of the image to be processed is automatically adjusted to the target number, greatly improving the efficiency and accuracy of image DPI adjustment.
With respect to step S101, in a specific embodiment, the acquiring manner of the scale region image in the image to be processed may specifically include:
in the first mode, the image to be processed is input into the trained scale division model M1, the scale region information output by the scale division model M1 is obtained, and the scale region image in the image to be processed is determined according to the scale region information output by the scale division model M1.
In mode 1.1, the scale region information output by the segmentation model M1 may itself be a scale area image (for example, the model directly detects the scale region and outputs a rectangular scale area image whose boundaries are parallel or perpendicular to the scale marks); in that case the output is used directly as the scale area image of the image to be processed. In mode 1.2, the scale region information output by M1 may be a mask of the scale region (for example, a mask containing at least one connected region, at least one of which is an approximately rectangular strip enclosing the scale); in that case the mask is further image-processed to obtain the scale area image of the image to be processed.
The specific image processing may be as follows: at least one rectangular region is determined from the mask of the scale region (for example, the largest connected region among the connected regions contained in the mask is found, and its minimum circumscribed rectangle or maximum inscribed rectangle is taken as the rectangular region; alternatively, the minimum circumscribed rectangle or maximum inscribed rectangle of every connected region in the mask is taken as a rectangular region). The image to be processed is then cropped according to the rectangular region to obtain the scale area image (when there are several rectangular regions, several scale area images can be obtained).
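A simplified sketch of the cropping step. It takes the bounding rectangle of all foreground pixels, which matches the minimum circumscribed rectangle when the mask holds a single axis-aligned connected region; picking the largest connected component first, as the text describes, would additionally need a labelling pass (e.g. a connected-components routine from an image library).

```python
def crop_by_mask(image, mask):
    """Crop the minimal axis-aligned rectangle covering mask == 1.
    Assumes the mask contains a single connected foreground region."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    xs = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]
    y0, y1 = min(ys), max(ys)
    x0, x1 = min(xs), max(xs)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```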
The scale segmentation model M1 is trained as follows: first, a first image sample set whose images contain a scale is collected and labeled. For mode 1.1, a rectangular region containing the graduations is labeled; for mode 1.2, a rectangular or approximately rectangular region containing the graduations is labeled. To improve the accuracy of the trained model, image transformations such as rotation and scaling may be applied to each image sample in the first image sample set to increase the number of image samples. Then, a preset fully convolutional network (FCN) is iteratively trained with the first image sample set to obtain the scale segmentation model M1. The preset fully convolutional network is based on the FCN and adopts a multi-layer hrnet18 structure as the backbone network.
In mode 2, the image to be processed is input into the scale segmentation model, and a scale region image is determined according to the scale region information output by the scale segmentation model; graduation recognition is then performed on the scale region image to obtain the graduation region image.
The scale region image is an image that contains the scale region of the image to be processed and contains no, or only a small part of, the non-scale region (the non-scale region being the region of the image to be processed other than the scale).
In mode 2.1, the scale region information output by the scale segmentation model M2 may itself be a scale region image (for example, the scale region is directly detected by M2 and a rectangular scale region image is output); in this case, determining the scale region image in the image to be processed based on the scale region information output by M2 consists in directly using the scale region information as the scale region image in the image to be processed. In mode 2.2, the scale region information output by M2 may be a mask of the scale region (for example, the mask contains at least one connected region, at least one of which is a rectangular strip framing the scale region); in this case, image processing is performed based on the scale region information output by M2 to obtain the scale region image in the image to be processed. The specific image processing may be as follows: at least one rectangular region is determined based on the mask of the scale region (for example, the largest connected region among the connected regions contained in the mask is found, and its minimum circumscribed rectangle is taken as the rectangular region determined based on the mask; or the minimum circumscribed rectangle of each connected region contained in the mask is taken as a rectangular region determined based on the mask), and the image to be processed is cropped according to the rectangular region to obtain the scale region image (when there are multiple rectangular regions, multiple scale region images can be obtained). Graduation recognition must then be performed on the obtained scale region image to obtain the graduation region image.
The scale segmentation model M2 is trained as follows: first, a second image sample set whose images contain a scale is collected, and the rectangular region of the scale in each sample is labeled. To improve the accuracy of the trained model, image transformations such as rotation and scaling may be applied to each image sample in the second image sample set to increase the number of image samples. Then, a preset fully convolutional network (FCN) is iteratively trained with the second image sample set to obtain the scale segmentation model M2. The preset fully convolutional network is based on the FCN and adopts a multi-layer hrnet18 structure as the backbone network.
Thus, the scale segmentation model M1 obtains the graduation region image directly in one step, whereas the scale segmentation model M2 requires graduation recognition after obtaining the scale region image before the graduation region image is available; M1 is therefore faster than M2 in determining the graduation region image. On the other hand, M2 detects the entire region of the scale, whereas M1 detects only the graduation region of the scale, so M2 is superior to M1 in detection range and detection stability.
In a specific embodiment, in the second aspect, when performing graduation recognition on the obtained scale region image, it is considered that a scale region image with black graduations on a white background is susceptible to errors caused by illumination and the like. Therefore, before performing graduation recognition on the scale region image, it is first confirmed whether the scale region image has white graduations on a black background; if not, image color inversion is applied to the scale region image to obtain a scale region image with white graduations on a black background, and graduation recognition is then performed on that image to improve the accuracy of the scale reading.
Specifically, the average of the pixel values of all pixel points in the scale region image is obtained. If the average is not less than a preset pixel threshold, for example 120, the scale region image has black graduations on a white background; image color inversion is therefore applied to the scale region image, the inverted image is taken as the new scale region image, and graduation recognition is performed on the new scale region image. If the average is less than the preset pixel threshold, for example 120, the scale region image has white graduations on a black background, and graduation recognition can be performed on it directly.
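A minimal sketch of this brightness check, assuming an 8-bit grayscale image; the threshold 120 follows the example in the text, and the function name is illustrative:

```python
import numpy as np

PIXEL_THRESHOLD = 120  # example value given in the text


def invert_if_bright(gray, threshold=PIXEL_THRESHOLD):
    """If the mean pixel value indicates black graduations on a white
    background, invert the image so the graduations become white on black."""
    if gray.mean() >= threshold:   # bright image: white background, invert
        return 255 - gray
    return gray                    # already white graduations on black
```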
By applying color inversion to scale images with black graduations on a white background, the influence of illumination and similar factors on graduation recognition can be reduced, improving the accuracy of graduation recognition and hence of the scale reading.
Based on the foregoing embodiment, in a specific implementation, the step of performing graduation recognition on the scale region image to obtain the graduation region image may specifically include:
S1, performing image thinning according to the scale region image to obtain a first image corresponding to the scale region image;
In a specific embodiment, the scale region image may be converted into a binarized image (so that the graduation marks contained in the scale region and the numbers corresponding to them lie in the foreground region of the binarized image), and the binarized image is then thinned to obtain the first image. Compared with the binarized image converted from the scale region image, the foreground region in the first image occupies a narrower pixel width.
S2, performing straight line detection on the first image to obtain the pixel positions of the two end points of each line segment in the first image;
the scale mark is a plurality of parallel line segments, and in order to identify the scale mark, the first image may be subjected to straight line detection processing to obtain pixel positions at two end points of each line segment in the first image. And the pixel position at the endpoint is the position determined by the first coordinate component and the second coordinate component of the endpoint. The line detection processing may be hough line detection processing or other prior art line detection processing. The hough line detection processing process is an existing image processing technology, and is not described herein in detail.
S3, performing parallel line detection on the line segments based on the pixel positions of their end points, to obtain the parallel line segments satisfying the preset parallel line segment condition.
It should be understood that the line segments in the scale include the graduation marks and the line segments of the numbers corresponding to the graduation marks, so the line segments detected in S2 may include graduation marks, line segments of numbers, and so on. The graduation marks are parallel to each other, whereas the other line segments may or may not be parallel to one another. Therefore, the direction of each line segment can be counted, and the direction with the largest number of line segments is the graduation mark direction; the parallel line segments in the graduation mark direction are taken as the parallel line segments satisfying the preset parallel line segment condition.
Owing to errors introduced by binarization, thinning, line detection and other steps, the directions of the detected line segments corresponding to graduation marks that are actually parallel may not coincide exactly. For this purpose, the range of 0 to 180 degrees is divided into W (for example, 32) direction sections, the direction section containing the most line segments is determined, and the line segments falling in that section and in its adjacent sections are taken as the parallel line segments satisfying the preset parallel line segment condition. For example, if the section containing the most line segments is the 28th, then the line segments whose directions fall in the 27th, 28th and 29th sections are all regarded as parallel line segments.
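The direction-section voting above can be sketched as follows, assuming segments are given as (x1, y1, x2, y2) end-point tuples; `filter_parallel_segments` is an illustrative name, and the wrap-around of neighbouring sections (0 and 180 degrees coincide) is an implementation detail not spelled out in the text:

```python
import numpy as np

W = 32  # number of direction sections over 0-180 degrees, per the example


def filter_parallel_segments(segments, w=W):
    """Keep the segments whose direction falls in the most-populated
    direction section or one of its two neighbouring sections."""
    seg = np.asarray(segments, dtype=float)
    # Direction of each segment, folded into [0, 180) degrees.
    angles = np.degrees(np.arctan2(seg[:, 3] - seg[:, 1],
                                   seg[:, 2] - seg[:, 0])) % 180.0
    bins = (angles / (180.0 / w)).astype(int) % w
    counts = np.bincount(bins, minlength=w)
    best = int(np.argmax(counts))
    keep = np.isin(bins, [(best - 1) % w, best, (best + 1) % w])
    return seg[keep]
```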
The parallel line detection described above filters out line segments that are not parallel to the graduation marks; for example, the horizontal stroke of the numeral 7 is a line segment not parallel to the graduation marks.
S4, obtaining the graduation region image according to the pixel positions of the two end points of the parallel line segments satisfying the preset parallel line segment condition.
Although S3 filters out line segments that are not parallel to the graduation marks, it cannot filter out the strokes, parallel to the graduation marks, contained in the numbers corresponding to the marks, such as the vertical strokes in the numerals 4 or 10. Therefore, S4 further removes the numbers and characters in the scale while retaining the graduation marks and the background between adjacent marks. From the pixel positions of the two end points of the parallel line segments satisfying the preset parallel line segment condition, a cropping region containing the graduation marks but excluding numbers and characters can be determined, and the graduation region image is cropped accordingly.
Based on the foregoing embodiment, in a specific implementation, the step of obtaining the graduation region image according to the pixel positions of the two end points of the parallel line segments satisfying the preset parallel line segment condition may specifically include:
and S4.1, determining a target area image in the first image according to the pixel positions of the two end points of the parallel line segment meeting the preset parallel line segment condition.
A minimum enclosing rectangular frame surrounding the parallel line segments satisfying the preset parallel line segment condition is determined from the pixel positions of their two end points, and the part of the first image where the rectangular frame lies is cropped out to obtain the target region image. It will be appreciated that the boundaries of the minimum enclosing rectangular frame are generally parallel or perpendicular to the parallel line segments, i.e. the boundaries of the target region image are parallel or perpendicular to the parallel line segments; the two boundaries of the target region that are perpendicular to the parallel line segments are referred to as the first target region boundary and the second target region boundary.
S4.2, acquiring a first average distance between the parallel line segments in the target region image and the first target region boundary, which includes the following steps:
step a: acquiring a first distance between each parallel line segment in the target area image and a first target area boundary;
for example, the smaller of the distances from the two end points of each parallel line segment to the first target area boundary may be taken as the first distance from the first target area boundary.
For example, the distance between the midpoint of the two endpoints of each parallel line segment and the boundary of the first target region may be taken as the first distance between the parallel line segment and the boundary of the first target region.
Step b: calculating the average value of the first distances corresponding to the parallel line segments to obtain the first average distance;
S4.3, acquiring a second average distance between the parallel line segments in the target region image and the second target region boundary;
this step is similar to S4.2 and will not be described again.
S4.4, determining the target region boundary corresponding to the smaller of the first average distance and the second average distance as the region retention boundary.
S4.5, starting from the region retention boundary and taking the target vertical length, perpendicular to the region retention boundary, as the width, cropping the target region image to obtain the graduation region image.
It should be noted that the target vertical length is a preset length chosen so that the numbers are cropped away. The target vertical length may specifically be the sum of the smaller average distance and the average length of the parallel line segments (for example, when the average distance is calculated from the line segment end points and the region boundary), or the sum of the smaller average distance and half the average length of the parallel line segments (for example, when the average distance is calculated from the line segment midpoints and the region boundary); this is not limited here.
As shown in fig. 2D, taking as an example a target region image containing the numeral 4, whose vertical stroke is parallel to the graduation marks, the parallel line segments satisfying the preset parallel line segment condition include both the graduation marks and the vertical stroke of the numeral 4. The distances d1 and d2 between the midpoint pixel positions of the parallel segments corresponding to the graduation marks and the two region boundaries L1 and L2 can be obtained, as can the distances d4 and d5 between the midpoint pixel positions of the parallel segments corresponding to the vertical stroke of the numeral and the boundaries L1 and L2. Since there are more parallel segments corresponding to graduation marks than to the numeral, the first average distance from the parallel segments to L1 is smaller than the second average distance to L2; L1 is thus the boundary closer to the graduation marks, and L2 the boundary closer to the numbers. Cropping from L1 removes the part containing the numbers and retains the part containing the graduation marks.
With L1 as the region retention boundary, a length d3 is extended from the region retention boundary in the direction perpendicular to it as the width, and the target region image is cropped accordingly; the graduation region image is then the image bounded by the region boundary L1 and d3. Since the distances here are measured from segment midpoints, d3 is the sum of the average distance from the parallel line segments to L1 and half the average length of the parallel line segments.
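Steps S4.2 to S4.5 can be sketched as follows, under the assumptions that the parallel segments are vertical, that the first and second target region boundaries are the top (y = 0) and bottom (y = h - 1) edges, and that distances are midpoint-based (so d3 uses half the average length, per the text); the function name and the row-range return form are illustrative:

```python
import numpy as np


def graduation_crop_rows(segments, h):
    """segments: (x1, y1, x2, y2) vertical parallel segments in a
    target-region image of height h. Returns the (start_row, end_row)
    of the strip retained next to the region retention boundary."""
    seg = np.asarray(segments, dtype=float)
    mid_y = (seg[:, 1] + seg[:, 3]) / 2.0
    lengths = np.abs(seg[:, 3] - seg[:, 1])
    d_top = mid_y.mean()               # first average distance (to y = 0)
    d_bottom = (h - 1 - mid_y).mean()  # second average distance (to y = h-1)
    # Target vertical length d3: smaller average distance plus half the
    # average segment length (midpoint-based variant).
    d3 = min(d_top, d_bottom) + lengths.mean() / 2.0
    if d_top <= d_bottom:              # graduations lie near the top boundary
        return 0, int(round(d3))
    return h - 1 - int(round(d3)), h - 1
```

With three graduation marks near the top and one digit stroke near the bottom, the retained strip hugs the top boundary and excludes the digit.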
In a specific embodiment, in mode 2.2 of obtaining the scale region image in the image to be processed, when the scale region information output by the scale segmentation model is a scale region mask, the step of determining the scale region image according to the scale region information output by the scale segmentation model may specifically include: determining the circumscribed rectangle of the scale region mask; and cropping the image to be processed according to the circumscribed rectangle to obtain the scale region image.
Further, the step of performing image thinning according to the scale region image to obtain the first image corresponding to the scale region image may specifically include:
Performing image binarization and dot multiplication with the scale region mask according to the scale region image to obtain a second image. Specifically, image binarization is first performed on the scale region image, and the binarization result is then dot-multiplied with the scale region mask to obtain the second image; alternatively, the scale region image is first dot-multiplied with the scale region mask, and image binarization is then performed on the result to obtain the second image.
It can be understood that when the scale region image is obtained by determining the circumscribed rectangle of the scale region mask and cropping the image to be processed according to that rectangle, the scale region image may include not only the scale region but also a non-scale region (the mask region can be considered to contain only the scale region, but its circumscribed rectangle may contain non-scale regions). The non-scale region may contain straight line segments that affect the accuracy of subsequent processing. Dot multiplication with the scale region mask crops away the non-scale region, ensuring that the second image contains no non-scale region and avoiding its influence on graduation recognition.
Then, image thinning is performed on the second image to obtain the first image corresponding to the scale region image.
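The binarize-then-dot-multiply variant can be sketched as follows; "dot multiplication" here is element-wise multiplication, and the fixed threshold 128 is an assumption (the text does not specify a binarization threshold):

```python
import numpy as np


def refine_with_mask(region_image, mask, threshold=128):
    """Binarize the scale region image, then element-wise multiply
    ('dot multiplication') with the scale region mask so that straight
    segments in the non-scale area cannot leak into the second image."""
    binary = (region_image >= threshold).astype(np.uint8)
    return binary * (mask > 0).astype(np.uint8)
```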
Based on any of the foregoing embodiments, with respect to step S103, the preset peak condition may include at least one of: a first condition that an accumulated value peak is not lower than the peak average value; a second condition that the drop between a peak and its adjacent valley is not lower than a preset peak drop value; and a third condition that the spacing between adjacent accumulated value peaks is not lower than a preset peak spacing value. The preset peak drop value is the minimum peak-to-valley distance between any accumulated value peak and its adjacent valley; the preset peak spacing value is the minimum spacing between adjacent accumulated value peaks.
Alternatively, the preset valley condition includes at least one of: a fourth condition that an accumulated value valley is not higher than the valley average value; a fifth condition that the drop between a valley and its adjacent peak is not lower than a preset valley drop value; and a sixth condition that the spacing between adjacent accumulated value valleys is not lower than a preset valley spacing value. The preset valley drop value is the minimum peak-to-valley distance between any accumulated value valley and its adjacent peaks; the preset valley spacing value is the minimum spacing between adjacent accumulated value valleys.
In one example, taking the acquisition of the first coordinate component list corresponding to the accumulated value peaks satisfying the preset peak condition as an example, obtaining the relevant data of each accumulated value peak on the pixel accumulated value curve includes:
acquiring a peak value average value corresponding to each accumulated value peak value on a pixel accumulated value curve; acquiring a peak-to-valley distance value between each accumulated value peak value and an adjacent valley value on a pixel accumulated value curve; acquiring a peak value distance value between adjacent accumulated value peak values on a pixel accumulated value curve;
Based on the peak average value, the peak-to-valley distance values and the peak spacing values corresponding to the accumulated value peaks on the pixel accumulated value curve, target accumulated value peaks are sought that simultaneously satisfy whichever of the first, second and third conditions are included in the preset peak condition. That is, if the peak average value, peak-to-valley distance value and/or peak spacing value corresponding to any accumulated value peak cannot simultaneously satisfy the preset peak condition, that accumulated value peak is ignored.
Similarly, the manner of acquiring the first coordinate component list corresponding to the accumulated value valley meeting the preset valley condition is similar to the above-mentioned manner, and is not described herein again.
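The three peak conditions map naturally onto the `height`, `prominence` and `distance` parameters of an off-the-shelf peak finder; the following sketch assumes `scipy.signal.find_peaks` and, for the first condition, takes the mean of all raw peaks as the height reference (the drop and spacing values are placeholders for the preset thresholds):

```python
import numpy as np
from scipy.signal import find_peaks


def peak_columns(curve, min_drop, min_spacing):
    """Abscissas (first coordinate components) of accumulated-value peaks
    satisfying the three preset peak conditions."""
    curve = np.asarray(curve, dtype=float)
    all_peaks, _ = find_peaks(curve)
    if len(all_peaks) == 0:
        return np.array([], dtype=int)
    peak_mean = curve[all_peaks].mean()       # reference for condition 1
    peaks, _ = find_peaks(curve,
                          height=peak_mean,    # condition 1: >= peak average
                          prominence=min_drop, # condition 2: peak-valley drop
                          distance=min_spacing)  # condition 3: peak spacing
    return peaks
```

The valley case can reuse the same routine on the negated curve.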
Based on any of the foregoing embodiments, existing scale images are typically either white-background black-graduation images or black-background white-graduation images. For a white-background black-graduation scale image, the accumulated value peaks occur in the bright white regions, and the white background region is much wider than the graduation region, so the positions of the accumulated value peaks within the white regions have a large uncertain drift; determining the first coordinate component list from the accumulated value peaks then gives inaccurate results (the differences between adjacent first coordinate components in the list are unstable and their variance is large, for example a first coordinate component list of 1, 4, 9, 10, 14). In this case the first coordinate component list may instead be determined from the accumulated value valleys, or from the inverse-color pixel accumulated value curve.
Therefore, when it is uncertain whether the scale in the current image to be processed has white graduations on a black background or black graduations on a white background, a first coordinate component list can be determined from the accumulated value peaks on the pixel accumulated value curve, and a first coordinate component inverse list from the accumulated value peaks on the inverse-color pixel accumulated value curve; the variances of the differences of the two lists are computed separately, and the list whose differences have the smaller variance detects the graduation marks more accurately. Alternatively, under the same uncertainty, a first coordinate component list corresponding to the accumulated value peaks and another corresponding to the accumulated value valleys can both be determined from the pixel accumulated value curve; again the variances of the differences of the two lists are computed, and the list with the smaller variance detects the graduation marks more accurately.
Hereinafter, the determination of the first coordinate component inverse list from the inverse-color pixel accumulated value curve is taken as an example; the determination of the first coordinate component list corresponding to the accumulated value valleys from the valleys on the pixel accumulated value curve is analogous and is omitted.
In a specific implementation, obtaining the inverse-color accumulated value curve corresponding to the image to be processed includes at least the following modes:
In the first mode, before step S102, color inversion is applied to the graduation region image to obtain an inverted graduation region image, and step S102 is then performed on the inverted image to obtain the inverse-color accumulated value curve corresponding to the image to be processed.
In the second mode, after the pixel accumulated value curve is obtained in step S102, color inversion may be applied to the points on the pixel accumulated value curve to obtain the inverse-color accumulated value curve corresponding to the image to be processed; that is, the inverse-color accumulated value curve may be obtained by inverse-color processing of the pixel accumulated value curve (for example, the ordinate of each point on the inverse-color pixel accumulated value curve is obtained by subtracting the ordinate of the point with the same abscissa on the pixel accumulated value curve from 255).
After the inverse-color accumulated value curve is obtained, the first coordinate component inverse list corresponding to the accumulated value peaks satisfying the preset peak condition on the inverse-color pixel accumulated value curve may be obtained.
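The two curves can be sketched as column-wise sums of an 8-bit grayscale graduation region image; note that the per-point "subtract from 255" form in the text strictly applies to a column-mean curve, while for a column-sum curve the inverse is 255 times the image height minus the curve, as the assertion below shows:

```python
import numpy as np


def accumulated_curves(gray):
    """Pixel accumulated value curve (one value per first coordinate
    component, i.e. per column) and its inverse-color counterpart,
    computed by inverting the image first (mode 1)."""
    curve = gray.astype(np.int64).sum(axis=0)
    inverse = (255 - gray.astype(np.int64)).sum(axis=0)
    return curve, inverse
```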
Accordingly, step S104 includes:
acquiring a first variance corresponding to the differences between adjacent first coordinate components in the first coordinate component list, and a second variance corresponding to the differences between adjacent first coordinate components in the first coordinate component inverse list; and determining the number of pixel points within a unit graduation of the scale according to the differences of the first coordinate components corresponding to the smaller of the first variance and the second variance.
In another specific embodiment, step S104 includes: determining, among the obtained differences between adjacent first coordinate components, the differences satisfying a preset difference condition as target differences, and determining the average of the target differences as the number of pixel points within a unit graduation of the scale.
Owing to reflection, blurring and the like, an individual graduation mark of the scale may go undetected, in which case a difference between adjacent first coordinate components may be a multiple of the number of pixels between adjacent graduation marks; directly taking the average of the differences as the number of pixels between adjacent graduation marks is then inaccurate. For this reason, the preset difference condition is used to filter out inaccurate differences before the average is calculated. The preset difference condition can be set according to actual requirements and is not limited here. For example, the preset difference condition may be that the difference is neither the maximum nor the minimum, i.e. the maximum and minimum differences are removed. As another example, the preset difference condition may be that the difference is greater than a first difference threshold and less than a second difference threshold, the first difference threshold being less than the second. In a specific example, the first quartile Q1 and the third quartile Q3 of the differences are calculated; the second difference threshold is up_bound = Q3 + a(Q3 - Q1 + b), and the first difference threshold is bottom_bound = Q3 - a(Q3 - Q1 + b), where a and b are constants determined by experiment.
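The quartile filtering can be sketched as follows, using the bounds exactly as given in the text; the values a = b = 1.0 are placeholders for the experiment-determined constants, and the function name is illustrative:

```python
import numpy as np


def unit_graduation_pixels(first_components, a=1.0, b=1.0):
    """Estimate the number of pixels per unit graduation from the first
    coordinate component list, filtering outlier differences with the
    quartile bounds given in the text."""
    comps = np.sort(np.asarray(first_components, dtype=float))
    diffs = np.diff(comps)
    q1, q3 = np.percentile(diffs, [25, 75])
    up_bound = q3 + a * (q3 - q1 + b)
    bottom_bound = q3 - a * (q3 - q1 + b)   # bounds as written in the text
    kept = diffs[(diffs > bottom_bound) & (diffs < up_bound)]
    return kept.mean() if kept.size else diffs.mean()
```

A doubled difference (a missed graduation mark) falls outside the bounds and no longer biases the average.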
It can be understood that when first coordinate components obtained in different manners are compared (that is, when the differences of the first coordinate components corresponding to the pixel accumulated value curve are compared with those corresponding to the inverse-color pixel accumulated value curve, or the first coordinate components corresponding to the accumulated value peaks of the pixel accumulated value curve are compared with those corresponding to its accumulated value valleys), the differences used in step S104 are determined from the first coordinate components in the list corresponding to the smaller variance.
Corresponding to the foregoing method, an embodiment of the present application further provides an apparatus for adjusting image DPI according to a scale. As shown in fig. 3, the image processing apparatus includes: an acquisition unit 310, a determination unit 320, and an adjustment unit 330;
the obtaining unit 310 is configured to obtain the graduation region image in the image to be processed, the graduation region of the scale being rectangular;
the determining unit 320 is configured to determine the pixel accumulated value corresponding to the first coordinate component of each pixel point on the first region boundary of the graduation region image, so as to obtain the pixel accumulated value curve; for any target coordinate component among the first coordinate components, the corresponding pixel accumulated value is the sum of the pixel values of all pixel points in the graduation region image whose first coordinate component is the target coordinate component; a point on the pixel accumulated value curve takes the first coordinate component of a pixel point on the first region boundary as its abscissa, and the pixel accumulated value corresponding to that first coordinate component as its ordinate; the first region boundary is the boundary of the graduation region image that is perpendicular to the graduation marks;
the obtaining unit 310 is further configured to obtain a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on the pixel accumulated value curve; or acquiring a first coordinate component list corresponding to an accumulated value valley value meeting a preset valley value condition on the pixel accumulated value curve;
the determining unit 320 is further configured to determine the number of pixel points within a unit graduation of the scale according to the differences between adjacent first coordinate components in the first coordinate component list;
and to determine an image adjustment coefficient based on the target number of pixel points per unit length, the number of pixel points within a unit graduation of the scale, and the length unit conversion coefficient; the length unit conversion coefficient is the ratio of the unit length to the physical length of a unit graduation of the scale;
the adjusting unit 330 is configured to adjust the size of the image to be processed based on the image adjustment coefficient, so that the number of pixel points per unit length in the image to be processed is adjusted to the target number of pixel points.
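The final adjustment can be sketched as follows; the way the three quantities combine into the coefficient (target pixels per unit length divided by current pixels per unit length, the latter being pixels per unit graduation times the conversion coefficient) is a reading of the text, stated here as an assumption, and the function names are illustrative:

```python
def dpi_adjust_coefficient(target_ppu, pixels_per_graduation, conversion):
    """Image adjustment coefficient: target pixel count per unit length
    over the current pixel count per unit length, where the current count
    is pixels per unit graduation times the length unit conversion
    coefficient (unit length / physical length of a unit graduation)."""
    return target_ppu / (pixels_per_graduation * conversion)


def adjusted_size(shape, coefficient):
    """New (height, width) after scaling both dimensions uniformly."""
    h, w = shape[:2]
    return int(round(h * coefficient)), int(round(w * coefficient))
```

For example, with 1 mm graduations (conversion coefficient 10 for a 1 cm unit length) spanning 15 pixels each, reaching 300 pixels per centimetre requires scaling the image by a factor of 2.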
The functions of the functional units of the image processing apparatus provided in the foregoing embodiment of the present application can be implemented through the corresponding method steps described above; therefore, the detailed working processes and beneficial effects of the units are not repeated herein.
An apparatus 130 for image DPI adjustment according to a scale, provided by an embodiment of the present application, is described below with reference to fig. 4. As shown in fig. 4, the apparatus 130 is embodied in the form of a general-purpose computing device. The components of the apparatus 130 may include, but are not limited to: at least one processor 131, at least one memory 132, and a bus 133 that connects the various system components (including the memory 132 and the processor 131).
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as random access memory (RAM) 1321 and/or cache memory 1322, and may further include read-only memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The apparatus 130 for image DPI adjustment according to a scale may also communicate with one or more external devices 134 (e.g., a keyboard, a pointing device, etc.) and/or any device (e.g., a router, a modem, etc.) that enables the apparatus 130 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 135. The apparatus 130 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 136. As shown in fig. 4, the network adapter 136 communicates with the other modules of the apparatus 130 over the bus 133. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the apparatus 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
In some possible embodiments, aspects of the image processing method provided by the present application may also be implemented in the form of a program product. The program product includes a computer program which, when run on a computer device, causes the computer device to perform the steps of the image processing method according to the various exemplary embodiments of the present application described above in this specification; for example, the device may perform the method steps in fig. 1.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for image processing of the embodiments of the present application may employ a portable compact disc read-only memory (CD-ROM), include a computer program, and run on a computing device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with a readable computer program embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer program embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer programs for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer program may execute entirely on the target computing device, partly on the target device, as a stand-alone software package, partly on the target computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the target computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (13)

1. A method for DPI adjustment of an image according to a scale, the method comprising:
acquiring a scale area image in an image to be processed; the scale area of the scale is rectangular;
determining a pixel accumulated value corresponding to the first coordinate component of each pixel point of the scale region image on a first region boundary, so as to obtain a pixel accumulated value curve; for any target coordinate component among the first coordinate components, the corresponding pixel accumulated value is the sum of the pixel values of all pixel points in the scale region image whose first coordinate component is that target coordinate component; each point on the pixel accumulated value curve takes the first coordinate component of a pixel point on the first region boundary as its abscissa, and the pixel accumulated value corresponding to that first coordinate component as its ordinate; the first region boundary is the boundary, among the boundaries of the scale region image, that is perpendicular to the scale marks;
acquiring a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on the pixel accumulated value curve; or acquiring a first coordinate component list corresponding to an accumulated value valley value meeting a preset valley value condition on the pixel accumulated value curve;
determining the number of pixel points in the unit scale of the scale according to the difference value of the first coordinate components with adjacent sizes in the first coordinate component list;
determining an image adjusting coefficient based on the number of target pixel points in unit length, the number of pixel points in unit scale of the scale and a length unit conversion coefficient; the length unit conversion coefficient is the ratio of the unit length to the physical length of the unit scale of the scale;
and adjusting the size of the image to be processed based on the image adjustment coefficient so as to adjust the number of pixel points within unit length in the image to be processed to the number of target pixel points.
2. The method of claim 1, wherein prior to acquiring the scale region image in the image to be processed, the method further comprises:
acquiring a first image sample set marked with a scale area of a scale;
performing iterative training on a first initial convolutional neural network by using the first image sample set to obtain a scale segmentation model;
acquiring a scale region image in an image to be processed, comprising:
and inputting the image to be processed into the scale segmentation model, and determining the scale region image in the image to be processed according to the scale region information output by the scale segmentation model.
3. The method of claim 1, wherein prior to acquiring the scale region image in the image to be processed, the method further comprises:
acquiring a second image sample set marked with a scale area;
performing iterative training on a second initial convolutional neural network by adopting the second image sample set to obtain a scale segmentation model;
acquiring a scale region image in an image to be processed, comprising:
inputting an image to be processed into the scale segmentation model, and determining a scale region image according to scale region information output by the scale segmentation model;
and carrying out scale identification processing on the scale region image to obtain a scale region image.
4. The method of claim 3, wherein prior to performing scale recognition processing on the scale region image to obtain the scale region image, the method further comprises:
if the average value of the pixel values of all the pixel points in the scale area image is not smaller than a preset pixel threshold value, performing image reverse color processing on the scale area image to obtain an image after the reverse color processing;
carrying out scale identification processing on the scale region image to obtain a scale region image, comprising:
determining the image after the reverse color processing as a new scale area image;
and carrying out scale identification processing on the new scale area image to obtain a scale area image.
5. The method of claim 3 or 4, wherein performing scale recognition processing on the scale region image to obtain a scale region image comprises:
performing image thinning processing according to the scale region image to obtain a first image corresponding to the scale region image;
performing straight line detection processing on the first image to obtain pixel positions at two end points of each line segment in the first image;
based on the pixel position of the end point of each line segment, performing parallel line detection processing on each line segment to obtain a parallel line segment meeting a preset parallel line segment condition;
and obtaining a scale region image of the scale according to the pixel positions at the two end points of the parallel line segment meeting the preset parallel line segment condition.
6. The method of claim 5, wherein obtaining a scale area image from pixel positions at two end points of the parallel line segment satisfying the preset parallel line segment condition comprises:
determining a target area image in the first image according to the pixel positions at two end points of the parallel line segment meeting the preset parallel line segment condition, wherein the target area image comprises the parallel line segment meeting the preset parallel line segment condition;
acquiring a first average distance between each parallel line segment in the target area image and a first target area boundary;
acquiring a second average distance between each parallel line segment in the target area image and a second target area boundary; the first target area boundary and the second target area boundary are boundaries of the target area boundaries perpendicular to the parallel line segments;
determining the target region boundary corresponding to the smaller of the first average distance and the second average distance as a region retention boundary;
and starting from the region retention boundary, performing image cropping on the target region image with a target length perpendicular to the region retention boundary as the width, so as to obtain the scale region image.
7. The method according to claim 5 or 6, wherein the scale region information output by the scale segmentation model is a scale region mask;
determining a scale region image according to scale region information output by the scale segmentation model, wherein the scale region image comprises:
determining, through the scale region mask, the circumscribed rectangle of the scale region in the image to be processed;
according to the circumscribed rectangle, cutting the image to be processed to obtain a scale area image;
performing image thinning processing according to the scale region image to obtain a first image corresponding to the scale region image, including:
performing image binarization processing on the scale region image and dot multiplication processing with the scale region mask to obtain a second image;
and performing image thinning processing on the second image to obtain a first image corresponding to the scale region image.
8. The method according to any one of claims 1 to 7, wherein the preset peak condition includes at least one of a first condition that an accumulated value peak is not lower than the peak average value, a second condition that the drop between an accumulated value peak and its adjacent valley is not lower than a preset peak drop value, and a third condition that the spacing between adjacent accumulated value peaks is not lower than a preset peak spacing value; the preset peak drop value is the minimum peak-to-valley distance between an accumulated value peak and its adjacent valley; the preset peak spacing value is the minimum spacing between adjacent accumulated value peaks; alternatively,
the preset valley condition includes at least one of a fourth condition that an accumulated value valley is not higher than the valley average value, a fifth condition that the distance between an accumulated value valley and its adjacent peak is not lower than a preset valley drop value, and a sixth condition that the spacing between adjacent accumulated value valleys is not lower than a preset valley spacing value; the preset valley drop value is the minimum peak-to-valley distance between an accumulated value valley and its adjacent peak; and the preset valley spacing value is the minimum spacing between adjacent accumulated value valleys.
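The peak conditions of claim 8 can be read as a three-stage filter over candidate peaks of the pixel accumulated value curve. The sketch below is one plausible interpretation, not the patent's implementation: the "adjacent valley" is taken as the minimum between a peak and its neighbouring candidate peaks (or the curve ends), and the function and parameter names are invented for illustration.

```python
import numpy as np

def filter_peaks(curve, min_drop, min_spacing):
    """Filter accumulated-value peaks by three conditions: (1) not lower
    than the peak average, (2) peak-to-adjacent-valley drop not lower than
    min_drop, (3) spacing to the previously kept peak not lower than
    min_spacing."""
    curve = np.asarray(curve, dtype=float)
    # Candidate peaks: strict local maxima of the accumulated value curve.
    cand = np.where((curve[1:-1] > curve[:-2]) & (curve[1:-1] > curve[2:]))[0] + 1
    if cand.size == 0:
        return cand
    # Condition 1: not lower than the average of all candidate peak values.
    cand = cand[curve[cand] >= curve[cand].mean()]
    # Segment boundaries for locating each peak's adjacent valleys.
    bounds = np.concatenate(([0], cand, [curve.size - 1]))
    kept = []
    for k, i in enumerate(cand):
        # Condition 2: drop to the higher of the two adjacent valleys.
        left_valley = curve[bounds[k]:i + 1].min()
        right_valley = curve[i:bounds[k + 2] + 1].min()
        if curve[i] - max(left_valley, right_valley) < min_drop:
            continue
        # Condition 3: spacing to the previously retained peak.
        if kept and i - kept[-1] < min_spacing:
            continue
        kept.append(int(i))
    return np.array(kept)
```

The same structure, with the comparisons mirrored, would implement the valley conditions of the alternative branch.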
9. The method of any one of claims 1 to 8, further comprising:
acquiring an inverse color accumulated value curve, wherein the inverse color accumulated value curve is obtained from the image resulting from inverse color processing of the image to be processed, or is obtained by performing inverse color processing on the pixel accumulated value curve;
acquiring an inverse color first coordinate component list corresponding to accumulated value peaks on the inverse color accumulated value curve that meet the preset peak condition; or acquiring an inverse color first coordinate component list corresponding to accumulated value valleys on the inverse color accumulated value curve that meet the preset valley condition;
wherein determining the number of pixel points in the unit scale of the scale according to the difference between the first coordinate components with adjacent sizes in the first coordinate component list comprises:
acquiring a first variance corresponding to the differences between the first coordinate components with adjacent sizes in the first coordinate component list, and a second variance corresponding to the differences between the first coordinate components with adjacent sizes in the inverse color first coordinate component list;
and determining the number of pixel points in the unit scale of the scale according to the differences between the first coordinate components corresponding to the smaller of the first variance and the second variance.
10. The method according to any one of claims 1 to 9, wherein determining the number of pixels in the unit scale of the scale according to the difference between the first coordinate components adjacent in size in the first coordinate component list comprises:
determining target difference values from the obtained difference values as those meeting a preset difference value condition, wherein the preset difference value condition is that a difference value is larger than a first difference threshold and smaller than a second difference threshold, the first difference threshold being smaller than the second difference threshold;
and determining the average value of the target difference values as the number of pixel points in the unit scale of the scale.
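The difference filtering of claim 10 discards outlier gaps (for instance, a double-width gap where one graduation was missed, or a tiny gap caused by a spurious peak) before averaging. A short sketch; the function name and the two threshold values are purely illustrative:

```python
import numpy as np

def pixels_per_unit_scale(components, first_threshold, second_threshold):
    """Average only the adjacent-component differences lying strictly
    between the two thresholds (claim-10-style outlier filtering)."""
    # Differences between numerically adjacent first coordinate components.
    diffs = np.diff(np.sort(np.asarray(components, dtype=float)))
    # Preset difference value condition: strictly between the thresholds.
    kept = diffs[(diffs > first_threshold) & (diffs < second_threshold)]
    # The average of the retained differences is the number of pixel
    # points per unit scale (per graduation interval).
    return float(kept.mean())
```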
11. An apparatus for DPI adjustment according to a scale, the apparatus comprising:
the acquisition unit is used for acquiring a scale area image in the image to be processed; the scale area of the scale is rectangular;
the determining unit is used for determining a pixel accumulated value corresponding to the first coordinate component of each pixel point of the scale region image on a first region boundary, so as to obtain a pixel accumulated value curve; for any target coordinate component among the first coordinate components, the corresponding pixel accumulated value is the sum of the pixel values of all pixel points in the scale region image whose first coordinate component is that target coordinate component; each point on the pixel accumulated value curve takes the first coordinate component of a pixel point on the first region boundary as its abscissa, and the pixel accumulated value corresponding to that first coordinate component as its ordinate; the first region boundary is the boundary, among the boundaries of the scale region image, that is perpendicular to the scale marks;
the acquisition unit is also used for acquiring a first coordinate component list corresponding to an accumulated value peak value meeting a preset peak value condition on the pixel accumulated value curve;
the determining unit is further configured to determine the number of pixel points in the unit scale of the scale according to the difference between the first coordinate components with adjacent sizes in the first coordinate component list;
determining an image adjusting coefficient based on the number of target pixel points in unit length, the number of pixel points in unit scale of the scale and the length unit conversion coefficient; the length unit conversion coefficient is the ratio of the unit length to the physical length of the unit scale of the scale;
and the adjusting unit is used for adjusting the size of the image to be processed based on the image adjusting coefficient so as to adjust the number of pixel points within the unit length in the image to be processed to the number of target pixel points.
12. An apparatus for DPI adjustment of an image according to a scale, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 10.
13. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 10.
CN202210089852.7A 2022-01-25 2022-01-25 Method, device, equipment and medium for adjusting image DPI (dots per inch) according to scale Active CN114494017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210089852.7A CN114494017B (en) 2022-01-25 2022-01-25 Method, device, equipment and medium for adjusting image DPI (dots per inch) according to scale


Publications (2)

Publication Number Publication Date
CN114494017A true CN114494017A (en) 2022-05-13
CN114494017B CN114494017B (en) 2023-04-07

Family

ID=81474146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210089852.7A Active CN114494017B (en) 2022-01-25 2022-01-25 Method, device, equipment and medium for adjusting image DPI (dots per inch) according to scale

Country Status (1)

Country Link
CN (1) CN114494017B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862856A (en) * 2022-07-07 2022-08-05 成都数之联科技股份有限公司 Panel defect area identification method and device, electronic equipment and medium
CN115239789A (en) * 2022-05-23 2022-10-25 华院计算技术(上海)股份有限公司 Method and device for determining liquid volume, storage medium and terminal
CN115546208A (en) * 2022-11-25 2022-12-30 浙江托普云农科技股份有限公司 Method and device for measuring plant height of field crops and application
CN117555892A (en) * 2024-01-10 2024-02-13 江苏省生态环境大数据有限公司 Atmospheric pollutant multimode fusion accounting model post-treatment method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001313814A (en) * 2000-04-28 2001-11-09 Canon Inc Image processor, image-processing method and storage medium
US6636216B1 (en) * 1997-07-15 2003-10-21 Silverbrook Research Pty Ltd Digital image warping system
US20050069179A1 (en) * 2003-08-07 2005-03-31 Kyungtae Hwang Statistical quality assessment of fingerprints
TW200807330A (en) * 2006-07-28 2008-02-01 Via Tech Inc Weight-adjusted apparatus and method thereof
CN101221654A (en) * 2007-01-08 2008-07-16 北京书生国际信息技术有限公司 Electric seal weakening method
CN103914696A (en) * 2014-03-27 2014-07-09 大连恒锐科技股份有限公司 Image binaryzation method
CN106383689A (en) * 2016-09-20 2017-02-08 青岛海信电器股份有限公司 Display font size adjustment method and apparatus, and terminal device
CN109029203A (en) * 2018-08-31 2018-12-18 昆明理工大学 A kind of semi-automatic measuring dimension of object device based on Digital Image Processing
CN109146768A (en) * 2017-01-03 2019-01-04 成都科创知识产权研究所 image conversion method, system and application
CN109373897A (en) * 2018-11-16 2019-02-22 广州市九州旗建筑科技有限公司 A kind of measurement method based on laser virtual ruler
CN109376518A (en) * 2018-10-18 2019-02-22 深圳壹账通智能科技有限公司 Privacy leakage method and relevant device are prevented based on recognition of face
CN111626280A (en) * 2020-04-13 2020-09-04 北京邮电大学 Method and device for identifying answer sheet without positioning point
CN111899237A (en) * 2020-07-27 2020-11-06 长沙大端信息科技有限公司 Scale precision measuring method, scale precision measuring device, computer equipment and storage medium
CN112508793A (en) * 2020-12-22 2021-03-16 深圳开立生物医疗科技股份有限公司 Image scaling method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN114494017B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN114494017B (en) Method, device, equipment and medium for adjusting image DPI (dots per inch) according to scale
EP3309703B1 (en) Method and system for decoding qr code based on weighted average grey method
CN106960208B (en) Method and system for automatically segmenting and identifying instrument liquid crystal number
US8712188B2 (en) System and method for document orientation detection
CN110781885A (en) Text detection method, device, medium and electronic equipment based on image processing
CN109241861B (en) Mathematical formula identification method, device, equipment and storage medium
CN109409355B (en) Novel transformer nameplate identification method and device
US11657644B2 (en) Automatic ruler detection
CN111259878A (en) Method and equipment for detecting text
CN108009536A (en) Scan method to go over files and system
CN115205223B (en) Visual inspection method and device for transparent object, computer equipment and medium
CN110660072B (en) Method and device for identifying straight line edge, storage medium and electronic equipment
CN109948521B (en) Image deviation rectifying method and device, equipment and storage medium
US10395090B2 (en) Symbol detection for desired image reconstruction
CN111899270A (en) Card frame detection method, device and equipment and readable storage medium
CN113888446A (en) Intelligent detection method for bending line of sheet metal structural part
CN113283439B (en) Intelligent counting method, device and system based on image recognition
CN116862910A (en) Visual detection method based on automatic cutting production
CN111008635A (en) OCR-based multi-bill automatic identification method and system
CN116402771A (en) Defect detection method and device and model training method and device
CN116030472A (en) Text coordinate determining method and device
CN115100663A (en) Method and device for estimating distribution situation of character height in document image
CN113780278A (en) Method and device for identifying license plate content, electronic equipment and storage medium
CN113378847A (en) Character segmentation method, system, computer device and storage medium
Amarnath et al. Automatic localization and extraction of tables from handheld mobile-camera captured handwritten document images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wei Kai

Inventor after: Wang Xinan

Inventor after: Wang Junqi

Inventor after: Wang Gang

Inventor after: Tang Linpeng

Inventor after: Tai Cheng

Inventor before: Wei Kai

Inventor before: Wang Xinan

Inventor before: Wang Junqi

Inventor before: Wang Gang

Inventor before: Tang Linpeng

Inventor before: Tai Cheng

GR01 Patent grant