CN116107394A - Adjustment method, adjustment device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116107394A
CN116107394A
Authority
CN
China
Prior art keywords
gap
image
target
gray level
edge detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310383336.XA
Other languages
Chinese (zh)
Other versions
CN116107394B (en)
Inventor
王占营
王静雅
常霞
汪晓雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Lianbao Information Technology Co Ltd
Original Assignee
Hefei Lianbao Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Lianbao Information Technology Co Ltd
Priority to CN202310383336.XA
Publication of CN116107394A
Application granted
Publication of CN116107394B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 - Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 - Constructional details or arrangements
    • G06F1/1613 - Constructional details or arrangements for portable computers
    • G06F1/1633 - Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1675 - Miscellaneous details related to the relative movement between the different enclosures or enclosure parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30164 - Workpiece; Machine component


Abstract

The application provides an adjustment method, an adjustment device, an electronic device and a storage medium. A first gap is formed between the rotating shaft of the electronic device and the device housing at a first position, and a second gap is formed between the rotating shaft and the device housing at a second position. The method comprises: acquiring a first initial image for the first gap and a second initial image for the second gap; preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image; obtaining a first target region of the first target image and a second target region of the second target image, wherein the first target region includes the first gap and the second target region includes the second gap; determining first target pixels of the first target region and second target pixels of the second target region; and determining, based on the first target pixels and the second target pixels, whether to adjust the first gap and/or the second gap. This provides technical support for improving the accuracy of gap adjustment.

Description

Adjustment method, adjustment device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an adjustment method, an adjustment device, an electronic device, and a storage medium.
Background
In an electronic device such as a notebook computer, a gap exists between the device housing (for example, the housing in which the screen is located) and the rotating shaft of the device. If the gap error is too large, the housing cannot be rotated normally. At present, the gap is mainly adjusted manually with a correction jig, which is neither intelligent nor sufficiently accurate.
Disclosure of Invention
The application provides an adjusting method, an adjusting device, electronic equipment and a storage medium, which are used for at least solving the technical problems in the prior art.
According to a first aspect of the present application, there is provided an adjustment method applied to an electronic device, the electronic device including a spindle and a device housing; the rotating shaft is provided with a first gap between a first position and the equipment shell, and a second gap between a second position and the equipment shell; the method comprises the following steps:
acquiring a first initial image for a first gap and a second initial image for a second gap;
preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image;
obtaining a first target area of a first target image and a second target area of a second target image, wherein the first target area comprises a first gap and the second target area comprises a second gap;
determining a first target pixel of the first target area and a second target pixel of the second target area;
based on the first target pixel and the second target pixel, it is determined whether to adjust the first gap and/or the second gap.
In the above scheme, the preprocessing the first initial image and the second initial image to obtain a first target image and a second target image includes:
respectively carrying out graying treatment on the first initial image and the second initial image to obtain a first gray image and a second gray image;
obtaining a first target image based on the first gray level image;
and obtaining a second target image based on the second gray level image.
In the above scheme, the obtaining the first target image based on the first gray scale image includes:
performing edge detection and binarization processing on the first gray level image to obtain a first target image;
the obtaining a second target image based on the second gray level image includes:
and performing edge detection and binarization processing on the second gray level image to obtain a second target image.
In the above scheme, performing edge detection and binarization processing on the first gray scale image to obtain a first target image includes:
performing edge detection on the first gray level image;
traversing the region including the first gap in the first gray level image after edge detection to obtain N segmentation thresholds; n is a positive integer greater than or equal to 1;
according to the N segmentation thresholds, respectively carrying out binarization processing on the region including the first gap in the first gray level image after edge detection to obtain N binarized images;
from the N binarized images, a first target image is determined.
In the above scheme, the performing edge detection and binarization processing on the second gray level image to obtain a second target image includes:
performing edge detection on the second gray level image;
traversing the region including the second gap in the second gray level image after edge detection to obtain M segmentation thresholds; m is a positive integer greater than or equal to 1;
according to the M segmentation thresholds, respectively carrying out binarization processing on the region including the second gap in the second gray level image after edge detection to obtain M binarized images;
and determining a second target image from the M binarized images.
In the above solution, the determining, based on the first target pixel and the second target pixel, whether to adjust the first gap and/or the second gap includes:
determining the actual distance of the first gap and the actual distance of the second gap based on preset mapping relations between the number of the first target pixels and the number of the second target pixels and the actual distance of the gaps respectively;
based on the actual spacing of the first gap and the actual spacing of the second gap, it is determined whether to adjust the first gap and/or the second gap.
In the above scheme, when the difference between the actual distance of the first gap and the actual distance of the second gap meets the preset condition, the first gap and/or the second gap is/are determined to be adjusted.
In the above scheme, the electronic device further comprises a motor for driving the device shell to move;
the method further comprises the steps of:
determining a gap adjustment value based on the actual spacing of the first gap and the actual spacing of the second gap;
based on the gap adjustment value, the driving motor drives the equipment shell to move so as to adjust the first gap and/or the second gap.
According to a second aspect of the present application, there is provided an adjustment device for use in an electronic apparatus comprising a spindle and an apparatus housing, the spindle having a first gap between the spindle and the apparatus housing at a first position and a second gap between the spindle and the apparatus housing at a second position; the device comprises:
A first acquisition unit configured to acquire a first initial image for a first gap and a second initial image for a second gap;
the preprocessing unit is used for preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image;
a second acquisition unit configured to acquire a first target area of a first target image and a second target area of a second target image, wherein the first target area includes a first gap, and the second target area includes a second gap;
a first determining unit configured to determine a first target pixel of the first target area and a second target pixel of the second target area;
and the second determining unit is used for determining whether to adjust the first gap and/or the second gap based on the first target pixel and the second target pixel.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described herein.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method described herein.
In the application, a first target image and a second target image are obtained based on preprocessing an acquired first initial image aiming at a first gap and a second initial image aiming at a second gap. Whether to adjust the first gap and/or the second gap is determined based on a first target pixel in a first target region including the first gap in the first target image and a second target pixel in a second target region including the second gap in the second target image. The gap is adjusted in an image processing mode, compared with a manual adjustment mode in the related art, the gap is intelligently adjusted, and the gap can be accurately adjusted in the image processing mode, so that the adjustment accuracy is improved.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 shows a schematic view of a first gap and a second gap in an embodiment of the present application;
FIG. 2 is a schematic diagram of an implementation flow of an adjustment method according to an embodiment of the present application;
FIG. 3 shows a schematic application diagram in an embodiment of the present application;
fig. 4 is a schematic diagram showing the composition and structure of an adjusting device according to an embodiment of the present application;
fig. 5 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more obvious and understandable, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It can be understood that in the process of manufacturing the housing of an electronic device, the gap error between the device housing and the rotating shaft is very important. If the gap error is too large, wear of the rotating shaft is accelerated, the housing cannot be rotated normally, and the quality of the electronic device is greatly reduced. In practical application, taking a portable notebook computer as an example of the electronic device, as shown in fig. 1, a left gap is formed between the screen-end housing of the notebook computer and the rotating shaft, and a right gap is formed on the right side. For the screen housing to rotate normally, the error between the left gap and the right gap must be kept within a standard error range. Because the gap spacing is generally small, the accuracy requirement is relatively high, and the accuracy is easily affected by the deformation tolerance of the plastic parts of the device housing. The current approach of adjusting manually with a jig alone is not intelligent enough, and the accuracy of the adjustment is difficult to guarantee. If the accuracy of gap adjustment can be improved, manufacturing cost can be saved and the quality of the electronic device can be ensured.
In the embodiment of the application, the first target image and the second target image are obtained based on preprocessing of the acquired first initial image aiming at the first gap and the acquired second initial image aiming at the second gap. Whether to adjust the first gap and/or the second gap is determined based on a first target pixel in a first target region including the first gap in the first target image and a second target pixel in a second target region including the second gap in the second target image. The gap is adjusted in an image processing mode, compared with a manual adjustment mode in the related art, the gap is intelligently adjusted, and the gap can be accurately adjusted in the image processing mode, so that the adjustment accuracy is improved.
The following describes the adjustment method in the embodiment of the present application in detail.
The embodiment of the application provides an adjusting method which is applied to electronic equipment, wherein the electronic equipment comprises a rotating shaft and an equipment shell; the rotating shaft is provided with a first gap between a first position and the equipment shell, and a second gap between a second position and the equipment shell; as shown in fig. 2, the method includes:
s201: a first initial image for a first gap and a second initial image for a second gap are acquired.
In this step, a first initial image for the first gap and a second initial image for the second gap are obtained by capturing images of the first gap and the second gap. For example, the adjustment device may photograph the first gap and the second gap with cameras. Taking a notebook computer as an example of the electronic device, the first gap is one of the left gap and the right gap shown in fig. 1, and the second gap is the other of the two. Two cameras may be used to capture images of the left gap and the right gap respectively, yielding the first initial image for the first gap and the second initial image for the second gap. The first initial image is an image that includes at least the first gap, and the second initial image is an image that includes at least the second gap. For example, when the electronic device is a portable notebook computer, the first initial image may be an image captured only of the left gap, or an image captured of the left gap and its surrounding area; the second initial image may be an image captured only of the right gap, or an image captured of the right gap and its surrounding area.
S202: and preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image.
In the application, a first initial image is preprocessed to obtain a first target image. And preprocessing the second initial image to obtain a second target image.
S203: a first target region of the first target image and a second target region of the second target image are obtained, wherein the first target region comprises a first gap and the second target region comprises a second gap.
In this step, the first target image and the second target image are obtained by preprocessing a first initial image and a second initial image, respectively, where the first target image includes a first target area and other areas except the first target area, and the second target image includes a second target area and other areas except the second target area. The first target area is an area including a first gap, and the second target area is an area including a second gap. That is, the first target region is a region corresponding to the first gap in the first target image, and the second target region is a region corresponding to the second gap in the second target image.
By identifying the first target area of the first target image and the second target area of the second target image, other areas in the first target image and the second target image are eliminated, and only the first target area corresponding to the first gap and the second target area corresponding to the second gap are processed, so that the calculated amount can be reduced, and the operation result is more accurate.
S204: a first target pixel of the first target region and a second target pixel of the second target region are determined.
In this step, the first target area and the second target area include a plurality of pixel points, and the pixel points in the first target area are identified as first target pixel points. And identifying the pixel points in the second target area as second target pixel points. And further a first target pixel of the first target region and a second target pixel of the second target region may be determined. Since the first target region is a region corresponding to the first gap and the second target region is a region corresponding to the second gap, the first target pixel is a pixel constituting the first gap and the second target pixel is a pixel constituting the second gap.
S205: based on the first target pixel and the second target pixel, it is determined whether to adjust the first gap and/or the second gap.
In this step, it may be determined whether an error between the first gap and the second gap meets a standard requirement based on the determined attribute of the first target pixel and the determined attribute of the second target pixel, so as to determine whether to adjust the first gap and/or the second gap. When the error between the first gap and the second gap does not meet the standard requirement, the first gap and/or the second gap need to be adjusted. For example, when the error between the first gap and the second gap exceeds the standard error range (does not meet the standard requirement or the preset condition), the motor may drive the device housing to move towards the first gap direction and/or the second gap direction, so as to reduce the first gap and/or the second gap, thereby adjusting the first gap and/or the second gap.
In the scheme shown in S201 to S205, a first target image and a second target image are obtained based on preprocessing of the acquired first initial image for the first gap and the acquired second initial image for the second gap. Whether to adjust the first gap and/or the second gap is determined based on a first target pixel in a first target region including the first gap in the first target image and a second target pixel in a second target region including the second gap in the second target image. The gap is adjusted in an image processing mode, compared with a manual adjustment mode in the related art, the gap is intelligently adjusted, and the gap can be accurately adjusted in the image processing mode, so that the adjustment accuracy is improved. The qualification rate of leaving the factory of the electronic equipment is improved, automatic gap adjustment is realized, and the practicability is high.
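As an illustration only, the following minimal sketch strings steps S201 to S205 together in Python, assuming the OpenCV and NumPy libraries. The file names, the region-of-interest coordinates, the focal length, the shooting distance and the imaging-unit size are hypothetical placeholders rather than values taken from this application, and a plain Otsu threshold stands in for the full edge-detection pipeline described below.

import cv2
import numpy as np

def gap_width_mm(image_path, roi, f_mm=16.0, u_mm=300.0, p_mm=0.0048):
    img = cv2.imread(image_path)                                    # S201: initial image of one gap
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                    # S202: graying
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # S202: binarization (Otsu stand-in)
    x, y, w, h = roi                                                # S203: target region containing the gap
    gap_pixels = int(np.count_nonzero(binary[y:y + h, x:x + w] == 255))  # S204: target pixels
    # Assumes the gap shows up as foreground (255) after thresholding.
    return gap_pixels * u_mm * p_mm / f_mm                          # S205: pixel count -> spacing, formula (11)

left_mm = gap_width_mm("left_gap.png", (100, 40, 200, 30))
right_mm = gap_width_mm("right_gap.png", (100, 40, 200, 30))
needs_adjustment = abs(left_mm - right_mm) > 0.2                    # preset 0.2 mm tolerance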
In an alternative solution, the preprocessing the first initial image and the second initial image to obtain a first target image and a second target image respectively includes:
respectively carrying out graying treatment on the first initial image and the second initial image to obtain a first gray image and a second gray image;
obtaining a first target image based on the first gray level image;
and obtaining a second target image based on the second gray level image.
In the present application, the first initial image and the second initial image are each subjected to graying processing, so that the initial color images are converted into grayscale images. Graying methods include the maximum-value method, the average-value method, the weighted-average method, and the like, which are not limited here.
It is contemplated that a grayscale image can represent a large portion of the features of the image with less data information. In the method, the original image is subjected to gray processing, and the target image is obtained based on the gray image, so that the processing speed can be increased, and the image contrast can be enhanced. The method is simple and feasible in engineering and high in reliability.
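For illustration, a sketch of the three graying methods mentioned above using NumPy follows; the 0.299/0.587/0.114 weights in the weighted-average variant are the common Rec.601 coefficients and are an assumption, since this application does not specify the weights.

import numpy as np

def to_gray_weighted(img_bgr: np.ndarray) -> np.ndarray:
    b = img_bgr[:, :, 0].astype(np.float32)
    g = img_bgr[:, :, 1].astype(np.float32)
    r = img_bgr[:, :, 2].astype(np.float32)
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)   # weighted-average method

def to_gray_max(img_bgr: np.ndarray) -> np.ndarray:
    return img_bgr.max(axis=2)                                    # maximum-value method

def to_gray_mean(img_bgr: np.ndarray) -> np.ndarray:
    return img_bgr.mean(axis=2).astype(np.uint8)                  # average-value method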
In an alternative solution, the obtaining a first target image based on the first gray scale image includes:
And performing edge detection and binarization processing on the first gray level image to obtain a first target image.
In an optional aspect, the obtaining a second target image based on the second gray level image includes:
and performing edge detection and binarization processing on the second gray level image to obtain a second target image.
In the application, edge detection and binarization processing are respectively carried out on the first gray level image and the second gray level image, so that a first target image and a second target image are obtained. Further, edge detection can be performed on the first gray level image based on an edge detection algorithm, so as to obtain a first edge image. And carrying out edge detection on the second gray level image based on an edge detection algorithm to obtain a second edge image. Then, binarization processing is carried out on the first edge image, and a first target image is obtained. And performing binarization processing on the second edge image to obtain a second target image.
Among conventional edge detection methods, the multi-stage Canny edge detection algorithm offers the best compromise between noise suppression and accurate preservation of edge features. The anti-noise edge detection algorithm used in this application improves noticeably on the Canny algorithm in terms of noise robustness.
Mathematical morphology serves as the tool for extracting image components in this edge detection algorithm: structuring elements of suitable shape and size can be chosen according to the characteristics of the components to be extracted. Specifically, for an element set B in the plane, morphological processing of B with a structuring element C yields the following four basic operations:
The morphological erosion of B by C is given by formula (1):

$S = B \ominus C$    (1)

where $\ominus$ denotes the morphological erosion operation and S is the image set after erosion.
The morphological dilation of B by C is given by formula (2):

$S = B \oplus C$    (2)

where $\oplus$ denotes the morphological dilation operation and S is the image set after dilation.
The morphological opening of B by C is given by formula (3):

$B \circ C = (B \ominus C) \oplus C$    (3)

where $\circ$ denotes the morphological opening operation.
The morphological closing of B by C is given by formula (4):

$B \bullet C = (B \oplus C) \ominus C$    (4)

where $\bullet$ denotes the morphological closing operation.
The mathematical expression of the new anti-noise edge detection algorithm introduced in this application is formula (5):

$D = \big((B \circ C) \oplus C\big) - \big((B \bullet C) \ominus C\big)$    (5)
In formulas (1) to (5), B denotes a gray image, C denotes the structuring element used for the morphological operations on the gray image, and D denotes the edge image obtained by subtracting the image eroded after the closing operation from the image dilated after the opening operation of the gray image B with the structuring element C. The morphological opening and erosion operations suppress positive noise, while the morphological closing and dilation operations suppress negative noise. Both the first edge image and the second edge image can be computed with formula (5).
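A possible OpenCV realization of formula (5) is sketched below; the 3x3 rectangular structuring element is an assumed choice, as the application does not fix the shape or size of C.

import cv2
import numpy as np

def antinoise_edges(gray: np.ndarray, ksize: int = 3) -> np.ndarray:
    C = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, C)    # B opened by C, suppresses positive noise
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, C)   # B closed by C, suppresses negative noise
    dilated = cv2.dilate(opened, C)                       # opened image then dilated by C
    eroded = cv2.erode(closed, C)                         # closed image then eroded by C
    return cv2.subtract(dilated, eroded)                  # edge image D of formula (5)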
After obtaining the edge image with the new anti-noise edge detection algorithm, the edge detection result can be evaluated, typically with the figure of merit (FOM), whose mathematical expression is given by formula (6):
$FOM = \dfrac{1}{\max(N_I, N_D)} \sum_{r=1}^{N_D} \dfrac{1}{1 + \alpha\, d(r)^2}$    (6)

where $N_I$ is the number of actual edge pixels in the gray image B, $N_D$ is the number of pixels in the edge image D obtained by the edge detection algorithm, $\alpha$ is a preset compensation coefficient, $d(r)$ is the distance from the r-th edge point in the edge image D to the nearest actual edge point, and $\max(N_I, N_D)$ is the larger of $N_I$ and $N_D$; the sum runs over r from 1 to $N_D$. The closer the FOM value, which lies in the range 0 to 1, is to 1, the better the edge detection result on the edge image D.
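The figure of merit of formula (6) could be computed as in the following sketch, assuming both the reference edge map and the detected edge image D are 0/255 masks of equal size and taking the compensation coefficient alpha as the commonly used 1/9, which is an assumption.

import cv2
import numpy as np

def figure_of_merit(true_edges: np.ndarray, detected: np.ndarray,
                    alpha: float = 1.0 / 9.0) -> float:
    n_true = int(np.count_nonzero(true_edges))               # N_I: actual edge pixels in B
    n_det = int(np.count_nonzero(detected))                  # N_D: pixels in edge image D
    if n_true == 0 or n_det == 0:
        return 0.0
    # Distance from every pixel to the nearest ground-truth edge pixel.
    background = (true_edges == 0).astype(np.uint8) * 255
    dist = cv2.distanceTransform(background, cv2.DIST_L2, 3)
    d = dist[detected > 0]                                   # d(r) for each detected edge point
    return float(np.sum(1.0 / (1.0 + alpha * d ** 2)) / max(n_true, n_det))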
In this application, obtaining the target image by performing edge detection and binarization on the gray image markedly reduces the amount of image data while preserving the essential image properties, which speeds up processing.
In an alternative solution, the performing edge detection and binarization processing on the first gray scale image to obtain a first target image includes:
performing edge detection on the first gray level image;
traversing the region including the first gap in the first gray level image after edge detection to obtain N segmentation thresholds; N is a positive integer greater than or equal to 1;
according to the N segmentation thresholds, respectively carrying out binarization processing on the region including the first gap in the first gray level image after edge detection to obtain N binarized images;
from the N binarized images, a first target image is determined.
Traversing the region including the first gap in the edge-detected first gray image to obtain N segmentation thresholds (N being a positive integer greater than or equal to 1), binarizing that region with each of the N thresholds to obtain N binarized images, and determining the first target image from the N binarized images can be regarded as a further implementation of binarizing the first edge image to obtain the first target image.

In the present application, the region including the first gap in the edge-detected first gray image is traversed, the gray value of each pixel in that region is taken as a segmentation threshold, and the region is binarized with each of these thresholds. Illustratively, with T denoting a segmentation threshold, T lies in the interval [0, 255] because pixel gray values range over [0, 255]. Traversing every pixel in the region yields N segmentation thresholds in [0, 255]. Each of the N thresholds is used to segment the region including the first gap into foreground and background: pixels whose gray value is below the threshold are assigned to the background image, and pixels whose gray value is above the threshold are assigned to the foreground image. Pixels whose gray value equals the threshold serve as the dividing value between foreground and background and are assigned to neither.

For each segmentation threshold, setting the gray value of background pixels to 0 and the gray value of foreground pixels to 255 turns the combined foreground and background into a binarized image. Each segmentation threshold thus corresponds to one binarized image, and the N segmentation thresholds correspond to N binarized images.

The first target image is the image determined from the N binarized images. Specifically, an optimal segmentation threshold is determined from the N segmentation thresholds, and the binarized image corresponding to the optimal threshold is taken as the first target image. The optimal segmentation threshold is determined by the maximum inter-class variance method: among the N binarized images, the one with the largest variance between foreground and background is selected, and the threshold used to obtain it is taken as the optimal segmentation threshold. The variance g between the foreground and background images is calculated with formula (7):
$g = \omega_0(\mu_0 - \mu)^2 + \omega_1(\mu_1 - \mu)^2$    (7)

where the total average gray level $\mu$ is given by formula (8):

$\mu = \omega_0\mu_0 + \omega_1\mu_1$    (8)

Combining formula (7) with formula (8), the final expression for the foreground/background variance g is formula (9):

$g = \omega_0\,\omega_1\,(\mu_0 - \mu_1)^2$    (9)

In formulas (7) to (9), $\omega_0$ is the proportion of foreground pixels in the whole binarized image; $\mu_0$ is the average gray level of the foreground image; $\omega_1$ is the proportion of background pixels in the whole binarized image; $\mu_1$ is the average gray level of the background image; $\mu$ is the total average gray level of the binarized image; and g is the variance between the foreground image and the background image.
Based on the above equation (9), the variances of the foreground image and the background image in the N binarized images are calculated, the maximum value of the variances between the foreground image and the background image is selected from the N binarized images, and the segmentation threshold used when the binarized image corresponding to the maximum variance is acquired is used as the optimal segmentation threshold. That is, the first target image is a binary image corresponding to the case where the segmentation threshold T is the optimal segmentation threshold. That is, the first target image is an image having the largest variance between the foreground image and the background image among the N binarized images.
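A sketch of this exhaustive threshold search, following formula (9), is given below; in practice OpenCV's built-in Otsu threshold (cv2.THRESH_OTSU) finds the same optimum, so the explicit loop is shown only to mirror the description above.

import numpy as np

def best_threshold(region: np.ndarray):
    pixels = region.ravel().astype(np.float64)
    best_t, best_g = 0, -1.0
    for t in np.unique(pixels):                        # each gray value occurring in the region is a candidate T
        fg = pixels[pixels > t]                        # foreground: gray values above T
        bg = pixels[pixels < t]                        # background: gray values below T
        if fg.size == 0 or bg.size == 0:
            continue
        w0, w1 = fg.size / pixels.size, bg.size / pixels.size  # omega_0, omega_1
        mu0, mu1 = fg.mean(), bg.mean()                        # mu_0, mu_1
        g = w0 * w1 * (mu0 - mu1) ** 2                         # between-class variance, formula (9)
        if g > best_g:
            best_t, best_g = int(t), g
    return best_t, best_g

def binarize(region: np.ndarray, t: int) -> np.ndarray:
    out = np.zeros_like(region, dtype=np.uint8)        # background pixels set to 0
    out[region > t] = 255                              # foreground pixels set to 255
    return out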
This approach of determining the first target image from the N binarized images is simple and feasible in engineering and improves the accuracy of the result.
In an alternative solution, the performing edge detection and binarization processing on the second gray level image to obtain a second target image includes:
performing edge detection on the second gray level image;
traversing the region including the second gap in the second gray level image after edge detection to obtain M segmentation thresholds; m is a positive integer greater than or equal to 1;
according to the M segmentation thresholds, respectively carrying out binarization processing on the region including the second gap in the second gray level image after edge detection to obtain M binarized images;
and determining a second target image from the M binarized images.
Traversing the region including the second gap in the edge-detected second gray image to obtain M segmentation thresholds (M being a positive integer greater than or equal to 1), binarizing that region with each of the M thresholds to obtain M binarized images, and determining the second target image from the M binarized images can be regarded as a further implementation of binarizing the second edge image to obtain the second target image.

In the present application, the region including the second gap in the edge-detected second gray image is traversed, the gray value of each pixel in that region is taken as a segmentation threshold, and the region is binarized with each of these thresholds. Illustratively, with T denoting a segmentation threshold, T lies in the interval [0, 255] because pixel gray values range over [0, 255]. Traversing every pixel in the region yields M segmentation thresholds in [0, 255]. Each of the M thresholds is used to segment the region including the second gap into foreground and background: pixels whose gray value is below the threshold are assigned to the background image, and pixels whose gray value is above the threshold are assigned to the foreground image. Pixels whose gray value equals the threshold serve as the dividing value between foreground and background and are assigned to neither.

For each segmentation threshold, setting the gray value of background pixels to 0 and the gray value of foreground pixels to 255 turns the combined foreground and background into a binarized image. Each segmentation threshold thus corresponds to one binarized image, and the M segmentation thresholds correspond to M binarized images.

The second target image is the image determined from the M binarized images. Specifically, an optimal segmentation threshold is determined from the M segmentation thresholds, and the binarized image corresponding to the optimal threshold is taken as the second target image. The optimal segmentation threshold is determined by the maximum inter-class variance method: the foreground/background variances of the M binarized images are computed with formula (9) above, the binarized image with the largest variance between foreground and background is selected, and the threshold used to obtain it is taken as the optimal segmentation threshold. That is, the second target image is the binarized image obtained when the segmentation threshold T equals the optimal threshold, i.e. the image among the M binarized images with the largest variance between the foreground and background images.
This approach of determining the second target image from the M binarized images is simple and feasible in engineering and improves the accuracy of the result.
In an alternative solution, the determining, based on the first target pixel and the second target pixel, whether to adjust the first gap and/or the second gap includes:
determining the actual distance of the first gap and the actual distance of the second gap based on preset mapping relations between the number of the first target pixels and the number of the second target pixels and the actual distance of the gaps respectively;
based on the actual spacing of the first gap and the actual spacing of the second gap, it is determined whether to adjust the first gap and/or the second gap.
In the scheme, the mapping relation between the number of pixels and the actual gap distance is shown in the following formula (10):
$Y = \dfrac{f\,X}{u\,p}$    (10)
Wherein Y is the number of pixels in the target image, f is the focal length of the acquisition equipment used for acquiring the initial image, u is the linear distance between the acquisition equipment and the acquired gap used for acquiring the initial image, p is the imaging unit size of the imaging device of the acquisition equipment used for acquiring the initial image, and X is the actual distance between the acquired gap.
Rearranging formula (10) gives formula (11):

$X = \dfrac{Y\,u\,p}{f}$    (11)

Substituting the number of first target pixels, the focal length of the acquisition device when the first initial image was captured, the linear distance between the acquisition device and the gap, and the imaging-unit size when the first initial image was captured into formula (11) yields the actual spacing of the first gap. Substituting the number of second target pixels and the corresponding focal length, distance and imaging-unit size when the second initial image was captured into formula (11) yields the actual spacing of the second gap.
According to the method and the device, based on the number of the first target pixels, the number of the second target pixels, the focal length of the acquisition equipment and the size of the imaging unit when the initial image is acquired, the actual distance of the first gap and the actual distance of the second gap are determined, accuracy and reliability of results are guaranteed, and the qualification rate of the electronic equipment is improved.
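A sketch of the mapping of formula (11) follows; the focal length, shooting distance and imaging-unit size used as defaults, as well as the pixel counts in the usage lines, are hypothetical values for illustration only.

def pixels_to_spacing_mm(pixel_count: int,
                         focal_length_mm: float = 16.0,
                         shooting_distance_mm: float = 300.0,
                         pixel_pitch_mm: float = 0.0048) -> float:
    # Formula (11): X = Y * u * p / f, with Y the target-pixel count, f the focal
    # length, u the camera-to-gap distance and p the imaging-unit (pixel) size.
    return pixel_count * shooting_distance_mm * pixel_pitch_mm / focal_length_mm

first_gap_mm = pixels_to_spacing_mm(42)    # hypothetical pixel count for the first gap
second_gap_mm = pixels_to_spacing_mm(55)   # hypothetical pixel count for the second gap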
In an alternative embodiment, it is determined to adjust the first gap and/or the second gap when the difference between the actual spacing of the first gap and the actual spacing of the second gap meets the preset condition.
In the present application, whether to adjust the first gap and/or the second gap is determined from the difference between the actual spacings of the first gap and the second gap. In the manufacturing process of the electronic device housing, the standard error requirement for the first gap and the second gap is that the difference between their actual spacings be less than or equal to a preset threshold, for example 0.2 mm. That is, when the difference between the actual spacing of the first gap and the actual spacing of the second gap is greater than 0.2 mm, the housing quality is not acceptable, the preset condition for adjustment is met, and the first gap and/or the second gap need to be adjusted. Illustratively, when the first gap is larger than the second gap, the first gap needs to be adjusted; when the second gap is larger than the first gap, the second gap needs to be adjusted. The first gap and the second gap may also be adjusted simultaneously, so that the difference between the final actual spacings of the first gap and the second gap falls within the standard error range.
By determining whether the difference between the actual spacing of the first gap and the actual spacing of the second gap meets the preset condition, and thereby whether to adjust the first gap and/or the second gap, this scheme improves consistency in the manufacturing process of the electronic device and has high practicability and economic value.
In an alternative scheme, the electronic equipment further comprises a motor for driving the equipment shell to move;
the method further comprises the steps of:
determining a gap adjustment value based on the actual spacing of the first gap and the actual spacing of the second gap;
based on the gap adjustment value, the driving motor drives the equipment shell to move so as to adjust the first gap and/or the second gap.
In this application, the motor drives the housing of the electronic device to move according to the gap adjustment value so as to adjust the first gap and/or the second gap. The gap adjustment value comprises the number of pulses required for the motor movement; the relationship between the number of pulses and the actual spacings of the first gap and the second gap is given by formula (12):
$H = f(A, K)$    (12)

(Formula (12) is reproduced only as an image in the original publication; it expresses the required pulse count H as a function of the larger and smaller gap spacings A and K.)
where H is the number of pulses required for the motor to drive the device housing, A is the larger of the actual spacing of the first gap and the actual spacing of the second gap, and K is the smaller of the two.
When the difference between the actual spacing of the first gap and the actual spacing of the second gap is greater than 0.2 mm and the first gap and/or the second gap need to be adjusted: if the actual spacing of the first gap is larger than that of the second gap, the motor drives the device housing toward the second gap by the required number of pulses, reducing the first gap and enlarging the second gap; if the actual spacing of the first gap is smaller than that of the second gap, the motor drives the device housing toward the first gap by the required number of pulses, reducing the second gap and enlarging the first gap. The direction in which the motor drives the housing depends on the sign of the required pulse count calculated with formula (12): a negative sign means the motor drives the housing in the negative direction, and a positive sign means it drives the housing in the positive direction. Which directions are positive and negative is determined by the directions in which the first gap and the second gap lie, and is not specifically limited in this application.
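The adjustment decision could be sketched as follows. Because formula (12) is reproduced only as an image in the publication, the pulse count is modelled here with an assumed pulses-per-millimetre constant and a half-difference correction; the sign convention (negative meaning the housing moves toward the first, i.e. left, gap) follows the description of Table 1, and none of the numeric values come from this application.

def pulse_command(first_mm: float, second_mm: float,
                  tol_mm: float = 0.2, pulses_per_mm: float = 100.0) -> int:
    if abs(first_mm - second_mm) <= tol_mm:
        return 0                                      # within the standard error range: no adjustment
    # Move the housing half the difference so the two gaps equalize;
    # negative when the first (left) gap is smaller, i.e. the housing moves toward it.
    correction_mm = (first_mm - second_mm) / 2.0
    return int(round(correction_mm * pulses_per_mm))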
By driving the device housing with the motor according to the gap adjustment value, the scheme of adjusting the first gap and/or the second gap replaces manual work, realizes intelligent and automatic gap adjustment, improves the accuracy of gap adjustment, and improves working efficiency.
In one embodiment, the electronic device is taken as a portable notebook computer as an example, and the adjustment method of the present application is described.
As shown in fig. 3, the left gap (first gap) and the right gap (second gap) between the notebook computer casing and the rotating shaft are photographed by two cameras, respectively, to obtain an initial image for the left gap and an initial image for the right gap.
The cameras are calibrated, and the mapping between the actual spacing of the photographed gap and the number of pixels in the photographed image is obtained from the imaging (similar-triangles) relation, based on the camera focal length, the shooting distance, and the imaging-unit size of the camera's imaging device.
And carrying out gray processing on the initial image of the left gap by adopting a weighted average method to obtain a gray image of the left gap. The gray image is subjected to edge detection by adopting an edge detection algorithm (formula (5)) of the application, and an edge image comprising a left gap area is obtained. And (5) carrying out binarization processing on the left gap area in the edge image based on a maximum inter-class variance method. Specifically, all pixel points in a left gap area in the edge map are traversed, and the gray value of each pixel point is used as a segmentation threshold value to obtain N segmentation threshold values. And performing binarization processing on the left gap region by adopting each of the N segmentation thresholds to obtain N binarized images. And calculating the foreground and background variances in each binarized image, taking the binarized image corresponding to the maximum value in the variances as a target image aiming at the left gap, and taking the segmentation threshold of the target image as the optimal segmentation threshold of the left gap region.
And carrying out gray processing on the initial image of the right gap by adopting a weighted average method to obtain a gray image of the right gap. And (3) performing edge detection on the gray level image by adopting an edge detection algorithm (formula (5)) to obtain an edge image comprising a right gap region. And (5) carrying out binarization processing on the right gap area in the edge image based on a maximum inter-class variance method. Specifically, all pixel points in a right gap area in the edge map are traversed, and the gray value of each pixel point is used as a segmentation threshold value to obtain M segmentation threshold values. And performing binarization processing on the right gap region by adopting each of the M segmentation thresholds to obtain M binarized images. And calculating the foreground and background variances in each binarized image, taking the binarized image corresponding to the maximum value in the variances as a target image aiming at the right gap, and taking the segmentation threshold of the target image as the optimal segmentation threshold of the right gap region.
In practical application, the optimal segmentation threshold value of the right gap region and the optimal segmentation threshold value of the left gap region may be the same, and may be different according to the specific situation.
The number of pixels of the left gap region in the target image for the left gap and the number of pixels of the right gap region in the target image for the right gap are determined. Substituting the pixel numbers of the left and right gap areas into a formula (11) which is satisfied by the actual gap distance and the pixel number in the photographed image to obtain the actual gap distances of the left and right gaps.
It is then judged whether the difference between the actual spacings of the left gap and the right gap is within the preset 0.2 mm range. If the difference exceeds 0.2 mm and the left gap is larger than the right gap, the motor drives the device housing toward the right gap, reducing the left gap and enlarging the right gap. If the right gap is larger than the left gap, the motor drives the device housing toward the left gap, reducing the right gap and enlarging the left gap. The distance the motor moves the housing depends on the number of pulses it receives; the number of pulses (the gap adjustment value) required to adjust the left gap and/or the right gap is obtained from the mathematical relationship between the actual spacings of the left and right gaps and the number of pulses required for the motor to move the housing.
The actual distance between the left and right gaps obtained by gap adjustment of a plurality of notebook computers and the number of pulses required for motor adjustment of the left and/or right gaps are shown in table 1.
TABLE 1
(Table 1 is reproduced as an image in the original publication; for each sampled notebook computer it lists the measured actual spacings of the left and right gaps and the signed number of motor pulses required for adjustment.)
As shown in Table 1, the required number of pulses can be positive or negative, indicating that the motor drives the device housing in different directions: a negative value means the housing moves to the left, and a positive value means it moves to the right.
Illustratively, when the difference between the actual spacing of the left gap and that of the right gap exceeds 0.2 mm: during gap adjustment of the notebook computer with serial number 1, the actual spacing of the left gap is smaller than that of the right gap and the required pulse count is negative, i.e. 15 pulses are required and the motor must drive the housing toward the left gap; during gap adjustment of the notebook computer with serial number 4, the actual spacing of the right gap is smaller than that of the left gap and the required pulse count is positive, i.e. 25 pulses are required and the motor drives the housing toward the right gap. It will be appreciated that the number of pulses determines the movement distance in a given direction, and the distance corresponding to a given pulse count may be set according to the actual situation and is not specifically limited here. Here the sign of the pulse count indicates the movement direction, and the movement distance corresponding to a specific pulse count (including its sign) can be used as the gap adjustment value for adjusting the gap in that direction of housing movement, so the left gap and/or the right gap can be adjusted with the gap adjustment value.
Based on the obtained initial images for the left gap and the initial images for the right gap, the gap is adjusted in an image processing mode, and compared with a manual adjustment mode in the related art, the gap is intelligently adjusted, and the accurate adjustment of the gap can be achieved in the image processing mode. The qualification rate of the electronic equipment is improved, automatic gap adjustment is realized, and the practicability is high.
The foregoing is a description of the adjustment method of the present application taking an electronic device as a portable notebook computer as an example, where the electronic device includes a device housing and a rotating shaft, the rotating shaft has a first gap between a first position and the device housing, and a second gap between a second position and the device housing, which is not repeated.
The embodiment of the application provides an adjusting device, which is applied to electronic equipment, wherein the electronic equipment comprises a rotating shaft and an equipment shell, the rotating shaft is provided with a first gap between a first position and the equipment shell, and a second gap between a second position and the equipment shell; as shown in fig. 4, the apparatus includes:
a first acquisition unit 401 for acquiring a first initial image for a first gap and a second initial image for a second gap;
a preprocessing unit 402, configured to perform preprocessing on the first initial image and the second initial image, to obtain a first target image and a second target image;
a second obtaining unit 403, configured to obtain a first target area of the first target image and a second target area of the second target image, where the first target area includes a first gap, and the second target area includes a second gap;
a first determining unit 404, configured to determine a first target pixel of the first target area and a second target pixel of the second target area;
a second determining unit 405, configured to determine whether to adjust the first gap and/or the second gap based on the first target pixel and the second target pixel.
In an alternative solution, the preprocessing unit 402 is configured to perform graying processing on the first initial image and the second initial image, so as to obtain a first gray scale image and a second gray scale image; obtaining a first target image based on the first gray level image; and obtaining a second target image based on the second gray level image.
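As a minimal sketch of this graying step (assuming, purely for illustration, that OpenCV is available and that the two initial images are read from hypothetical files rather than directly from the cameras):

```python
# Graying the two initial images; library choice and file names are assumptions.
import cv2

first_initial = cv2.imread("first_gap.png")     # hypothetical image of the first gap
second_initial = cv2.imread("second_gap.png")   # hypothetical image of the second gap

first_gray = cv2.cvtColor(first_initial, cv2.COLOR_BGR2GRAY)
second_gray = cv2.cvtColor(second_initial, cv2.COLOR_BGR2GRAY)
```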
In an alternative solution, the preprocessing unit 402 is configured to perform edge detection and binarization processing on the first gray scale image to obtain a first target image; and performing edge detection and binarization processing on the second gray level image to obtain a second target image.
In an alternative, the preprocessing unit 402 is configured to perform edge detection on the first gray scale image; traversing the region including the first gap in the first gray level image after edge detection to obtain N segmentation thresholds; n is a positive integer greater than or equal to 1; according to the N segmentation thresholds, respectively carrying out binarization processing on the region including the first gap in the first gray level image after edge detection to obtain N binarized images; from the N binarized images, a first target image is determined.
In an alternative, the preprocessing unit 402 is configured to perform edge detection on the second gray level image; traversing the region including the second gap in the second gray level image after edge detection to obtain M segmentation thresholds; m is a positive integer greater than or equal to 1; according to the M segmentation thresholds, respectively carrying out binarization processing on the region including the second gap in the second gray level image after edge detection to obtain M binarized images; and determining a second target image from the M binarized images.
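The edge detection and multi-threshold binarization described in the two alternatives above can be pictured with the following sketch; the Sobel operator, the percentile-based threshold traversal and the criterion for picking the target image are assumptions made only for the example and are not limitations of the scheme:

```python
# Edge detection + N-threshold binarization for one gray level image (illustrative only).
import cv2
import numpy as np


def binarize_gap_region(gray: np.ndarray, roi: tuple) -> np.ndarray:
    x, y, w, h = roi                                   # region that includes the gap
    # Edge detection via Sobel gradient magnitude (operator choice is an assumption).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.normalize(cv2.magnitude(gx, gy), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    region = edges[y:y + h, x:x + w]

    # Traverse the gap region to obtain N segmentation thresholds; evenly spaced
    # percentiles of the edge response are used here as an assumption.
    thresholds = np.percentile(region, [50, 60, 70, 80, 90]).astype(np.uint8)

    # One binarized image per threshold.
    candidates = [cv2.threshold(region, int(t), 255, cv2.THRESH_BINARY)[1] for t in thresholds]

    # Determine the target image from the N binarized images; choosing the candidate whose
    # foreground area is most stable between neighbouring thresholds is an assumed criterion.
    areas = np.array([int((c > 0).sum()) for c in candidates])
    best = int(np.argmin(np.abs(np.diff(areas)))) if len(areas) > 1 else 0
    return candidates[best]
```

The same function could be applied to the second gray level image with its own region of interest to obtain the M binarized images and, from them, the second target image.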
In an alternative solution, the second determining unit 405 is configured to determine the actual distance of the first gap and the actual distance of the second gap based on a preset mapping relationship between the number of pixels and the actual gap distance, using the number of the first target pixels and the number of the second target pixels respectively; and to determine, based on the actual distance of the first gap and the actual distance of the second gap, whether to adjust the first gap and/or the second gap.
In an alternative solution, the second determining unit 405 is configured to determine to adjust the first gap and/or the second gap in a case where the difference between the actual distance of the first gap and the actual distance of the second gap meets a preset condition.
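A minimal sketch of the preset mapping between the number of target pixels and the actual distance, together with the preset condition on the difference, might look as follows; the calibration factor, the per-row averaging and the 0.2 mm value are assumptions chosen for illustration:

```python
# Map counted gap pixels to an actual distance and decide whether to adjust.
import numpy as np

MM_PER_PIXEL = 0.05          # assumed preset mapping between pixel count and distance
PRESET_DIFFERENCE_MM = 0.2   # assumed preset condition on the gap difference


def gap_distance_mm(binary_gap_image: np.ndarray) -> float:
    """Average number of target (gap) pixels per row, converted to millimetres."""
    counts = (binary_gap_image > 0).sum(axis=1)      # gap pixels in each image row
    counts = counts[counts > 0]                      # keep rows that actually contain the gap
    return float(counts.mean()) * MM_PER_PIXEL


def needs_adjustment(first_mm: float, second_mm: float) -> bool:
    return abs(first_mm - second_mm) > PRESET_DIFFERENCE_MM
```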
In an alternative scheme, the electronic equipment further comprises a motor for driving the equipment shell to move; the second determining unit 405 is configured to determine a gap adjustment value based on the actual distance of the first gap and the actual distance of the second gap, and to drive, based on the gap adjustment value, the motor to move the equipment shell so as to adjust the first gap and/or the second gap.
It should be noted that, since the adjusting device of the embodiment of the present application solves the problem on a principle similar to that of the foregoing adjustment method, its implementation process, implementation principle and beneficial effects can be understood with reference to the corresponding description of the foregoing method and are not repeated here.
In practical application, the electronic device includes a lens, a light source, and two cameras, where the two cameras respectively photograph the first gap and the second gap under the aforementioned lens and light source to obtain a first initial image for the first gap and a second initial image for the second gap. The two cameras are connected to the industrial personal computer serving as the preprocessing unit 402 through cables.
In implementation, the first acquisition unit 401 may be the aforementioned two cameras, or may be a virtual unit that obtains the first initial image and the second initial image by reading the images captured by the cameras.
In operation, the preprocessing unit 402 is an industrial personal computer, and processes the received image data.
In practice, the second determining unit 405 may be implemented by a programmable logic controller together with an adjusting and positioning mechanism. The programmable logic controller receives the gap adjustment value and drives the motor to move based on the gap adjustment value. The adjusting and positioning mechanism is designed as a two-axis mechanism, so that the whole adjustment process has degrees of freedom in the front, back, left and right directions. Meanwhile, the adjusting and positioning mechanism also fixes the electronic equipment, ensuring that only the equipment shell moves, so that the gap between the rotating shaft and the equipment shell is adjusted.
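Putting the units together, the overall flow realized by the cameras, the industrial personal computer and the programmable logic controller could be orchestrated roughly as below; every callable name is a hypothetical placeholder, since the application does not define a software interface:

```python
# Illustrative orchestration of units 401-405; all callables are placeholders.
from typing import Callable
import numpy as np


def adjust_hinge_gaps(
    capture_first: Callable[[], np.ndarray],         # camera aimed at the first gap (unit 401)
    capture_second: Callable[[], np.ndarray],        # camera aimed at the second gap (unit 401)
    preprocess: Callable[[np.ndarray], np.ndarray],  # graying, edge detection, binarization (unit 402)
    measure_mm: Callable[[np.ndarray], float],       # target pixels -> actual distance (units 403/404)
    pulses_from_gaps: Callable[[float, float], int], # e.g. the gap_adjustment_pulses sketch above
    drive_motor: Callable[[int], None],              # PLC pulse output to the two-axis mechanism (unit 405)
) -> None:
    first_mm = measure_mm(preprocess(capture_first()))
    second_mm = measure_mm(preprocess(capture_second()))
    pulses = pulses_from_gaps(first_mm, second_mm)
    if pulses != 0:
        drive_motor(pulses)   # signed pulses: negative = leftwards, positive = rightwards
```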
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 shows a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as the adjustment method. For example, in some embodiments, the adjustment method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the adjustment method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the adjustment method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An adjusting method is characterized in that the method is applied to electronic equipment, and the electronic equipment comprises a rotating shaft and an equipment shell; the rotating shaft is provided with a first gap between a first position and the equipment shell, and a second gap between a second position and the equipment shell; the method comprises the following steps:
acquiring a first initial image for a first gap and a second initial image for a second gap;
preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image;
obtaining a first target area of a first target image and a second target area of a second target image, wherein the first target area comprises a first gap and the second target area comprises a second gap;
determining a first target pixel of the first target area and a second target pixel of the second target area;
based on the first target pixel and the second target pixel, it is determined whether to adjust the first gap and/or the second gap.
2. The method according to claim 1, wherein preprocessing the first and second initial images to obtain a first target image and a second target image, respectively, comprises:
respectively carrying out graying treatment on the first initial image and the second initial image to obtain a first gray level image and a second gray level image;
obtaining a first target image based on the first gray level image;
and obtaining a second target image based on the second gray level image.
3. The method of claim 2, wherein:
the obtaining a first target image based on the first gray level image comprises the following steps:
performing edge detection and binarization processing on the first gray level image to obtain a first target image;
the obtaining a second target image based on the second gray level image includes:
and performing edge detection and binarization processing on the second gray level image to obtain a second target image.
4. A method according to claim 3, wherein performing edge detection and binarization processing on the first gray scale image to obtain a first target image comprises:
performing edge detection on the first gray level image;
traversing the region including the first gap in the first gray level image after edge detection to obtain N segmentation thresholds; n is a positive integer greater than or equal to 1;
according to the N segmentation thresholds, respectively carrying out binarization processing on the region including the first gap in the first gray level image after edge detection to obtain N binarized images;
and determining a first target image from the N binarized images.
5. A method according to claim 3, wherein performing edge detection and binarization processing on the second gray level image to obtain a second target image comprises:
performing edge detection on the second gray level image;
traversing the region including the second gap in the second gray level image after edge detection to obtain M segmentation thresholds; m is a positive integer greater than or equal to 1;
according to the M segmentation thresholds, respectively carrying out binarization processing on the region including the second gap in the second gray level image after edge detection to obtain M binarized images;
and determining a second target image from the M binarized images.
6. The method of any one of claims 1 to 5, wherein determining whether to adjust the first gap and/or the second gap based on the first target pixel and the second target pixel comprises:
determining the actual distance of the first gap and the actual distance of the second gap based on preset mapping relations between the number of the first target pixels and the number of the second target pixels and the actual distance of the gaps respectively;
based on the actual distance of the first gap and the actual distance of the second gap, determining whether to adjust the first gap and/or the second gap.
7. The method of claim 6, wherein:
under the condition that the difference value between the actual distance of the first gap and the actual distance of the second gap meets a preset condition, determining to adjust the first gap and/or the second gap.
8. The method of claim 7, wherein the electronic device further comprises a motor for moving the device housing;
the method further comprises the steps of:
determining a gap adjustment value based on the actual distance of the first gap and the actual distance of the second gap;
based on the gap adjustment value, driving the motor to move the equipment shell so as to adjust the first gap and/or the second gap.
9. An adjustment device, characterized in that the device is applied to an electronic device, the electronic device comprises a rotating shaft and a device housing, the rotating shaft has a first gap between a first position and the device housing, and has a second gap between a second position and the device housing; the device comprises:
a first acquisition unit configured to acquire a first initial image for a first gap and a second initial image for a second gap;
the preprocessing unit is used for preprocessing the first initial image and the second initial image respectively to obtain a first target image and a second target image;
a second acquisition unit configured to acquire a first target area of a first target image and a second target area of a second target image, wherein the first target area includes a first gap, and the second target area includes a second gap;
a first determining unit configured to determine a first target pixel of the first target area and a second target pixel of the second target area;
and the second determining unit is used for determining whether to adjust the first gap and/or the second gap based on the first target pixel and the second target pixel.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202310383336.XA 2023-04-06 2023-04-06 Adjustment method, adjustment device, electronic equipment and storage medium Active CN116107394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310383336.XA CN116107394B (en) 2023-04-06 2023-04-06 Adjustment method, adjustment device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116107394A true CN116107394A (en) 2023-05-12
CN116107394B CN116107394B (en) 2023-08-04

Family

ID=86258281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310383336.XA Active CN116107394B (en) 2023-04-06 2023-04-06 Adjustment method, adjustment device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116107394B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120281878A1 (en) * 2009-11-25 2012-11-08 Honda Motor Co., Ltd. Target-object distance measuring device and vehicle mounted with the device
US20140270540A1 (en) * 2013-03-13 2014-09-18 Mecommerce, Inc. Determining dimension of target object in an image using reference object
US20180018497A1 (en) * 2015-02-13 2018-01-18 Byd Company Limited Method and device for calculating line distance
CN111444904A (en) * 2020-03-23 2020-07-24 Oppo广东移动通信有限公司 Content identification method and device and electronic equipment
CN112022191A (en) * 2020-09-03 2020-12-04 上海联影医疗科技股份有限公司 Positioning method and system
CN113033550A (en) * 2021-03-15 2021-06-25 合肥联宝信息技术有限公司 Image detection method and device and computer readable medium
CN113920083A (en) * 2021-09-30 2022-01-11 北京达佳互联信息技术有限公司 Image-based size measurement method and device, electronic equipment and storage medium
CN115809999A (en) * 2022-12-07 2023-03-17 苏州镁伽科技有限公司 Method and device for detecting target object on device, electronic equipment and storage medium
CN115909353A (en) * 2022-12-07 2023-04-04 中国工商银行股份有限公司 Image binarization processing method and device

Also Published As

Publication number Publication date
CN116107394B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
EP2085928B1 (en) Detection of blobs in images
US20130136338A1 (en) Methods and Apparatus for Correcting Disparity Maps using Statistical Analysis on Local Neighborhoods
EP3798975B1 (en) Method and apparatus for detecting subject, electronic device, and computer readable storage medium
CN116563282B (en) Drilling tool detection method and system based on machine vision
CN107346547B (en) Monocular platform-based real-time foreground extraction method and device
CN116433701B (en) Workpiece hole profile extraction method, device, equipment and storage medium
CN104483712A (en) Method, device and system for detecting invasion of foreign objects in power transmission line
CN116107394B (en) Adjustment method, adjustment device, electronic equipment and storage medium
CN114037087A (en) Model training method and device, depth prediction method and device, equipment and medium
CN115409856B (en) Lung medical image processing method, device, equipment and storage medium
CN116385415A (en) Edge defect detection method, device, equipment and storage medium
CN115546764A (en) Obstacle detection method, device, equipment and storage medium
CN115841632A (en) Power transmission line extraction method and device and binocular ranging method
CN114581890B (en) Method and device for determining lane line, electronic equipment and storage medium
CN113382134B (en) Focusing debugging method of linear array industrial camera
CN108664978B (en) Character segmentation method and device for fuzzy license plate
CN117876475A (en) Image acquisition method, device, equipment and storage medium
CN118097100A (en) Equipment card slot positioning method, device, equipment and storage medium
CN116934739A (en) Image processing method, device, equipment and storage medium
CN115420207A (en) Shield machine tail gap measuring method, device, equipment and medium
CN116363400A (en) Vehicle matching method and device, electronic equipment and storage medium
CN117764913A (en) Image detection method, device, electronic equipment and storage medium
CN117456329A (en) Automatic recognition and gate slot detection method for gate warehouse gate based on vision
CN116596941A (en) Image segmentation method, device, equipment and storage medium
CN117437391A (en) Image detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant