WO2023280285A1 - Focusing method, device, and storage medium - Google Patents
Focusing method, device, and storage medium
- Publication number
- WO2023280285A1 (PCT/CN2022/104462)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- lens
- score
- code
- code reading
- maximum value
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
- G02B7/36—Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/10544—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
- G06K7/10792—Special measures in relation to the object to be scanned
- G06K7/10801—Multidistance reading
- G06K7/10811—Focalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/02—Mountings, adjusting means, or light-tight connections, for optical elements for lenses
- G02B7/04—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
- G02B7/08—Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B13/00—Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
- G03B13/32—Means for focusing
- G03B13/34—Power focusing
- G03B13/36—Autofocus systems
Definitions
- the present application relates to the technical field of image acquisition, and in particular to a focusing method, device and storage medium.
- code-reading cameras for reading two-dimensional codes and barcodes are widely used in industrial scenarios such as express logistics.
- the distance between the code-reading camera and the working plane often changes.
- an auto-focus method can usually be used to quickly focus the code-reading camera on the working plane.
- In the related art, each time the lens of the code-reading camera moves to a position, a frame of image is collected at that position. The image clarity score of the collected image is then calculated using the image autocorrelation method or the image high-frequency component method. After obtaining the image clarity scores for multiple lens positions in this way, the position with the highest image clarity score is taken as the best position of the lens, and the lens is driven to move to that position to complete autofocus.
- The embodiments of the present application provide a focusing method, device, and storage medium, which can ensure the code-reading success rate after focusing while also ensuring the clarity of the images captured by the code-reading camera after focusing. The technical solution is as follows:
- a focusing method comprising:
- Focusing is done according to the best position of said lens.
- the determining the optimal position of the lens according to the image clarity scores and code reading scores of the multiple positions includes:
- Focusing is done according to the best position of said lens.
- the obtaining the image clarity score and the code reading score when the lens of the code reading camera is in multiple positions includes:
- the starting point of search is the starting point or the end point of the travel of the lens
- the starting point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface of the object to be read is the largest, and the end point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface is the smallest;
- the determining at least one rough position of the lens from the multiple positions according to the image clarity scores at the multiple positions includes:
- the first position is used as the rough position of the lens.
- the determining at least one rough position of the lens from the multiple positions according to the image clarity scores at the multiple positions includes:
- the determined second position is used as at least one rough position of the lens.
- the determining the optimal position of the lens according to at least one rough position of the lens and the code reading scores at the multiple positions includes:
- the optimal position of the lens is determined.
- the determining the optimal position of the lens according to the local maximum value of the code reading score in each candidate search interval includes:
- the position corresponding to the local maximum value of the code-reading score in the candidate search interval is used as the optimal position of the lens;
- the maximum value among the local maximum values of the code-reading scores in the multiple candidate search intervals is used as the global maximum value; if there is one position corresponding to the global maximum value, that position is used as the optimal position of the lens; if there are multiple positions corresponding to the global maximum value, the distance between each of these positions and the rough position corresponding to the candidate search interval in which it is located is determined, and the position with the smallest such distance is taken as the optimal position of the lens.
- the focusing according to the best position of the lens includes:
- the lens is driven to move to the best position of the lens to complete focusing.
- a focusing device comprising:
- An acquisition module configured to acquire image clarity scores and code reading scores when the lens of the code reading camera is in multiple positions, and the code reading scores are used to characterize the code reading success rate of the code reading camera;
- a determination module configured to determine the optimal position of the lens according to the image clarity scores and code reading scores at the multiple positions
- the focusing module is used for focusing according to the optimal position of the lens.
- the determining module includes:
- a first determining module configured to determine at least one rough position of the lens from the plurality of positions according to the image sharpness scores at the plurality of positions;
- the second determination module is used to determine the optimal position of the lens according to at least one rough position of the lens and the code reading scores at the plurality of positions;
- the acquiring module is specifically configured to:
- the starting point of search is the starting point or the end point of the travel of the lens
- the starting point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface of the object to be read is the largest, and the end point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface is the smallest;
- the first determination module is specifically configured to:
- the first position is used as the rough position of the lens.
- the first determination module is specifically configured to:
- the determined second position is used as at least one rough position of the lens.
- the second determination module is specifically configured to:
- the optimal position of the lens is determined.
- the second determining module is further specifically configured to:
- the position corresponding to the local maximum value of the code-reading score in the candidate search interval is used as the optimal position of the lens;
- the maximum value among the local maximum values of the code-reading scores in the multiple candidate search intervals is used as the global maximum value; if there is one position corresponding to the global maximum value, that position is used as the optimal position of the lens; if there are multiple positions corresponding to the global maximum value, the distance between each of these positions and the rough position corresponding to the candidate search interval in which it is located is determined, and the position with the smallest such distance is taken as the optimal position of the lens.
- the focusing module is specifically configured to:
- the lens is driven to move to the best position of the lens to complete focusing.
- In another aspect, a focusing device is provided. The device includes a control unit, a motor, a lens, and a moving mechanism;
- the moving mechanism drives the lens to move in a direction perpendicular to the mirror surface of the lens
- the control unit is connected to the motor, and the control unit is used to execute any one of the methods described above, so as to control the motor to drive the moving mechanism to drive the lens to move to complete focusing.
- In another aspect, a computer device is provided, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory is used to store a computer program, and the processor is used to execute the program stored in the memory, so as to implement the steps of the aforementioned focusing method.
- a computer-readable storage medium wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the steps of the aforementioned focusing method are implemented.
- a computer program product containing instructions, which, when run on a computer, causes the computer to execute the steps of the aforementioned focusing method.
- In the embodiments of the present application, automatic focusing is performed according to the image clarity scores and code-reading scores obtained when the lens of the code-reading camera is at different positions. Since the code-reading score can represent the code-reading success rate of the code-reading camera, the embodiments of the present application evaluate the focusing effect of the lens at different positions by combining the image clarity and the code-reading effect of the code-reading camera, determine the best position of the lens accordingly, and then complete automatic focusing. In this way, not only is the clarity of the images collected by the code-reading camera after focusing ensured, but also the code-reading success rate of the code-reading camera after focusing.
- FIG. 1 is a system architecture diagram involved in a focusing method provided by an embodiment of the present application
- FIG. 2 is a schematic structural diagram of a focusing device provided by an embodiment of the present application.
- Fig. 3a is a flowchart of a focusing method provided by an embodiment of the present application.
- Fig. 3b is a flow chart of another focusing method provided by the embodiment of the present application.
- Fig. 4 is a graph of image sharpness scores provided by an embodiment of the present application.
- Fig. 5a is a schematic structural diagram of another focusing device provided by an embodiment of the present application.
- Fig. 5b is a schematic structural diagram of another focusing device provided by an embodiment of the present application.
- Fig. 6a is a schematic structural diagram of a computer device provided by an embodiment of the present application.
- Fig. 6b is a schematic structural diagram of another computer device provided by an embodiment of the present application.
- the focusing method provided in the embodiment of the present application can be applied to industrial scenarios involving reading of two-dimensional codes or barcodes.
- a code-reading camera is installed on the gantry above the express automatic conveyor belt, and the two-dimensional code or barcode on the upper surface of the express package is read by the code-reading camera to sort the express package.
- In this scenario, the distance between the lens of the code-reading camera and the upper surface of the express package can be adjusted through the focusing method provided by the embodiments of the present application, enabling fast autofocus.
- a code-reading camera is installed on the production line in the factory, and the two-dimensional code or barcode on the product on the production line is read by the code-reading camera to realize the storage of product information.
- the distance between the code-reading camera and the surface where the product’s QR code or barcode is located may change.
- In this case, the focusing method provided in the embodiments of the present application adjusts the distance between the lens of the code-reading camera and the surface where the product's two-dimensional code or barcode is located, thereby realizing fast automatic focusing.
- In some scenarios, a stationary reference object can be placed under the lens of the code-reading camera, and the code-reading camera can then be controlled to focus on the reference object. After focusing, the code-reading camera is ready for use. During subsequent use, the object to be read passes under the lens of the code-reading camera. If the height difference between the object to be read and the aforementioned reference object is large, the code-reading camera performs re-focusing. In this case, too, fast autofocus can be achieved through the focusing method provided by the embodiments of the present application.
- the above are only two possible application scenarios provided by the embodiment of the present application.
- the embodiment of the present application can also be used in other industrial code reading scenarios requiring auto-focus, which is not limited in the embodiment of the present application.
- FIG. 1 is a structural diagram of an image acquisition system provided by an embodiment of the present application. As shown in FIG. 1 , the image acquisition system includes a code reading camera 101 , a conveyor belt 102 and an object 103 to be read on the conveyor belt 102 .
- the code-reading camera 101 may be installed on a door frame above the conveyor belt 102 . Moreover, the lens of the code reading camera 101 faces the target surface of the object to be read 103 on the conveyor belt 102 .
- the target surface refers to the plane where the two-dimensional code or the barcode is located. In the express logistics industry, usually, the target surface is the upper surface of the express package when it is on the conveyor belt 102, that is, the surface of the express package away from the conveyor belt. Of course, in other possible scenarios, the target surface may also be the side of the object 103 to be read, which is not limited in this embodiment of the present application.
- The code-reading camera 101 may start image acquisition when a trigger signal is detected. Specifically, the code-reading camera 101 first adjusts the position of the lens by moving the lens.
- Adjusting the position of the lens may refer to adjusting the overall position of the lens, or to adjusting the local position of the lens, provided that the focus of the code-reading camera changes accordingly with the adjustment of the lens position.
- In one possible implementation, the centroid position of the lens may be adjusted while keeping the shape of the lens unchanged. In another possible implementation, the lens is a T-Lens (variable-focus lens) or a liquid lens; in that case, the relative positions of the points inside the lens can be adjusted while keeping the centroid position of the lens unchanged, that is, the shape of the lens is changed.
- In the following, adjusting the centroid of the lens is used as an example for illustration; the principle when changing the shape of the lens is exactly the same and will not be repeated here.
- When the centroid of the lens moves, the distance between the lens and the target surface of the object to be read also changes, so adjusting the centroid position of the lens also means adjusting the distance between the lens and the target surface of the object to be read. Each time this distance is adjusted, the code-reading camera can collect an image of the object to be read, calculate the image clarity score and the code-reading score for each of the multiple lens positions based on the collected images, and then determine the best position of the lens from the multiple positions according to the image clarity scores and code-reading scores at those positions.
- An image acquisition herein may consist of a single image or of multiple images.
- the image clarity score is used to indicate the clarity of the image
- the code reading score is used to indicate the quality of the QR code and barcode in the image.
- the image clarity score is used to indicate the overall clarity of the multiple images
- the code reading score is used to indicate the overall quality of the two-dimensional codes and barcodes in the multiple images.
- For the sharpness score, the respective sharpness of the multiple images may be determined to obtain an image sharpness score representing the mean sharpness of all the images, or the clearest image may be determined from the multiple images and an image sharpness score generated that represents the sharpness of that clearest image.
- the code-reading camera drives the lens to move to the optimal position for focusing by controlling the motor.
- the code reading camera 101 can capture the image of the two-dimensional code or the barcode on the target surface of the object to be read 103, and read the two-dimensional code or the barcode.
- FIG. 2 shows a schematic structural diagram of a focusing device 200 applied to a code reading camera.
- the focusing device 200 includes a control unit 201 , a motor 202 , a lens 203 and a moving mechanism 204 .
- control unit 201 is connected with the motor 202 .
- the lens 203 may be located in a moving mechanism 204, as shown in FIG. 2 .
- the lens 203 is connected to a moving mechanism 204 (not shown in FIG. 2 ).
- the motor 202 is connected to the moving mechanism 204, so that the control unit 201 can control the motor 202 to drive the moving mechanism 204 to move.
- the movement of the moving mechanism 204 drives the lens 203 to move.
- The moving mechanism 204 can drive the lens 203 to move in a direction perpendicular to the mirror surface of the lens 203, that is, to move in the direction of the optical axis of the mirror surface (hereinafter referred to as the axis of the mirror surface), and the movable stroke of the moving mechanism 204 is fixed.
- the lens 203 will also move along an axis perpendicular to its mirror surface, and the stroke of the lens 203 is also constant.
- the control unit 201 first controls the motor 202 to drive the moving mechanism 204 to drive the lens 203 to continuously move along the specified search direction according to the specified search step size, so as to search for a rough position of the lens 203 .
- the specified search step size can be a fixed step size, or a variable step size.
- For example, in one possible embodiment the search step size is fixed at 1 unit step, and in another possible embodiment the initial search step size is 2 unit steps and decreases as the image clarity score increases.
- After obtaining the rough position of the lens 203, the control unit 201 determines the best position of the lens 203 according to the code-reading scores obtained at the positions passed by the lens 203 during the rough search, and then calculates the moving distance from the current position of the lens 203 to the best position and the moving direction for focusing. Afterwards, the control unit 201 controls the motor 202 to drive the moving mechanism 204 to move the lens 203 along the focusing moving direction to reach the best position, so as to complete focusing.
- the code reading camera also includes other necessary camera components such as an image sensor, a light filter component, and a supplementary light device, which will not be described in this embodiment of the present application.
- Fig. 3a is a flowchart of a focusing method provided by an embodiment of the present application. This method can be applied to a code-reading camera, or to an electronic device that is independent of the code-reading camera and capable of controlling the code-reading camera to focus; for example, it can be applied to the control unit of the code-reading camera shown in Fig. 2. Referring to Fig. 3a, the method comprises the following steps:
- Step 301: Obtain the image clarity score and code-reading score when the lens of the code-reading camera is at multiple positions, where the code-reading score is used to represent the code-reading success rate of the code-reading camera.
- the code-reading score can represent the code-reading success rate of the code-reading camera.
- the code reading camera after the code reading camera starts to focus, it first acquires the image clarity score and the code reading score of the lens at the starting point of the search.
- search starting point refers to that the lens starts to move from the search starting point after starting to focus.
- the search starting point may be the starting point of the travel of the lens, or the end of the travel of the lens of the code reading camera.
- the starting point of the travel of the lens is the position point where the vertical distance between the lens and the target surface of the object to be read is the largest
- the end point of the travel of the lens is the position point where the vertical distance between the lens and the target surface is the smallest. That is to say, when the moving mechanism in the code-reading camera drives the lens to move away from the target surface in a direction perpendicular to the mirror surface of the lens, and moves to the point where it cannot continue to move, the distance between the lens and the target surface is the largest. At this point, the position of the lens is the starting point of the lens' travel.
- the moving mechanism in the code-reading camera drives the lens to move toward the target surface in a direction perpendicular to the mirror surface of the lens, and moves to the point where it can no longer move, the distance between the lens and the target surface is the smallest, and , the position of the lens is the end of the lens' stroke.
- Before autofocus is performed each time, the code-reading camera adjusts the position of the lens to the starting point or the end point of the stroke.
- The code-reading camera can collect an image when the lens is at the search starting point, and use the image autocorrelation method, the image high-frequency component method, or another method for calculating image clarity scores, together with the image collected at this position, to calculate the image clarity score when the lens is at this position (a minimal sketch of such a scoring function is given below). The higher the image clarity score, the clearer the image collected when the lens is at this position.
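As an illustration of how such a clarity score might be computed, the sketch below uses the variance of a Laplacian-filtered image as a stand-in for the high-frequency-component idea mentioned above; the function name and the choice of metric are illustrative assumptions, not the specific scoring method defined by the patent.

```python
import cv2
import numpy as np


def image_clarity_score(image_bgr: np.ndarray) -> float:
    """Return a sharpness score for one captured frame.

    Uses the variance of a Laplacian-filtered image, a common
    high-frequency-component measure: a well-focused image has strong
    edges, so the filtered response has a large variance.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    laplacian = cv2.Laplacian(gray, cv2.CV_64F)
    return float(laplacian.var())
```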
- The code-reading camera can also calculate the code-reading score of a barcode in the image collected at the search starting point based on the ISO 15416 standard, or calculate the code-reading score of a two-dimensional code in the image collected at the search starting point according to the ISO 15415 standard. The higher the code-reading score, the better the code-reading effect when the lens is at this position.
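The ISO 15416 / ISO 15415 grading procedures themselves are not reproduced here. The sketch below only illustrates the role the code-reading score plays in the method; `decode_symbols()` is a hypothetical helper standing in for a real decoder and grader, not an existing library call.

```python
from typing import List, NamedTuple


class DecodedSymbol(NamedTuple):
    data: bytes
    grade: float  # assumed 0-4 quality grade in the spirit of ISO 15415/15416


def decode_symbols(image) -> List[DecodedSymbol]:
    """Hypothetical decoder/grader: a real system would implement or wrap
    ISO 15416 (barcode) / ISO 15415 (2D code) print-quality grading here."""
    raise NotImplementedError


def code_reading_score(image) -> float:
    """Return 0 when no symbol decodes; otherwise the best symbol grade.

    A higher score indicates a higher chance of reading the code
    successfully when the lens is at the current position.
    """
    symbols = decode_symbols(image)
    if not symbols:
        return 0.0
    return max(symbol.grade for symbol in symbols)
```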
- the code reading camera can control the motor to drive the lens to move continuously from the search starting point. Every time the lens moves to a position, the code reading camera calculates an image clarity score and a code reading score while the lens is in that position.
- the code reading camera controls the motor to drive the lens to move along the specified search direction according to the specified search step from the search starting point.
- Each time the lens moves by one search step, the image clarity score and the code-reading score are obtained once.
- When the search starting point is the starting point of the lens travel, the specified search direction is the direction from the starting point of the travel to the end point of the travel; when the search starting point is the end point of the travel, the specified search direction is the direction from the end point of the travel to the starting point of the travel.
- the lens of the code reading camera can be located in a moving mechanism or connected to the moving mechanism, and the movement of the moving mechanism drives the lens to move.
- the moving mechanism moves along a direction perpendicular to the mirror surface of the lens, and the movable stroke of the moving mechanism is certain.
- the lens will also move along a direction perpendicular to its own mirror surface, and the stroke of the lens is also constant.
- the lens may move according to a specified search step, and the specified search step is the distance for moving the lens once.
- the specified search step size can be manually set. Wherein, the smaller the specified search step size is, the higher the search accuracy is, and the larger the specified search step size is, the higher the search efficiency is.
- When the code-reading camera starts to focus, the lens may be neither at the starting point nor at the end point of the stroke. In this case, the code-reading camera can control the lens to move from its current position to the starting point or the end point of the stroke, and then move along the specified search direction according to the specified search step from there, following the method described above.
- Each time the lens moves to a position, the code-reading camera can collect an image at that position, and use the image autocorrelation method, the image high-frequency component method, or another image clarity scoring method, together with the image collected at that position, to calculate the image clarity score when the lens is at that position.
- Similarly, the code-reading camera can also calculate the code-reading score of a barcode in the image collected at that position based on the ISO 15416 standard, or calculate the code-reading score of a two-dimensional code in the image collected at that position according to the ISO 15415 standard.
- Step 302: Determine the best position of the lens according to the image clarity scores and code-reading scores at the multiple positions.
- The method for determining the best position may differ across application scenarios, but the image clarity score and the code-reading score at the best position should both be as high as possible; for any other position, either its image clarity score is lower than that of the best position, or its code-reading score is lower than that of the best position.
- the comprehensive score of each position is determined according to the image clarity scores and code reading scores at multiple positions, and the position with the highest comprehensive score is determined as the best position of the lens.
- The composite score of each position is obtained by weighting and summing the image clarity score and the code-reading score of that position, where the weights may be set based on experience (see the sketch below).
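A minimal sketch of this composite-score variant, assuming the clarity scores and code-reading scores for the candidate lens positions have already been collected; the weight values are illustrative, not values specified by the patent.

```python
def best_position_by_composite(positions, clarity_scores, code_scores,
                               w_clarity: float = 0.5, w_code: float = 0.5):
    """Pick the position whose weighted sum of image clarity score and
    code-reading score is highest."""
    composite = [w_clarity * c + w_code * r
                 for c, r in zip(clarity_scores, code_scores)]
    best_index = max(range(len(positions)), key=composite.__getitem__)
    return positions[best_index]
```

In practice the two scores live on different scales, so they would typically be normalized before weighting.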
- step 302 includes two steps: step 3021 and step 3022:
- Step 3021: Determine at least one rough position of the lens from the multiple positions according to the image sharpness scores at the multiple positions.
- In one possible implementation, each time the lens moves, the code-reading camera calculates an image clarity score and a code-reading score. Based on this, each time an image clarity score is obtained at a newly reached position during the lens movement, the code-reading camera can judge the trend of all the acquired image clarity scores as the lens moves. Suppose that, as the lens moves, the acquired image clarity scores first increase and then decrease, and the decrease ratio of the most recently acquired image clarity score relative to the maximum value among the acquired image clarity scores reaches the first threshold.
- In that case, the code-reading camera further judges whether the code-reading score at the first position, i.e. the position corresponding to the maximum value among the acquired image clarity scores, is greater than the second threshold. If it is greater than the second threshold, the code-reading camera takes the first position as the rough position of the lens.
- For example, the code-reading camera can plot the relationship curve between the acquired image clarity scores and their corresponding positions. If the image clarity scores on the relationship curve first increase and then decrease, the code-reading camera calculates the difference between the maximum value among the acquired image clarity scores and the image clarity score at the current position of the lens, and then calculates the ratio of this difference to that maximum value; this ratio is the decrease ratio. If the decrease ratio is greater than the first threshold, it is determined that the decrease ratio has reached the first threshold, and it is considered that the lens position with the highest image clarity has been found.
- At this point, the code-reading camera can further use the code-reading score to determine the best position of the lens.
- the code-reading camera can use the first position as the rough position of the searched lens.
- the code-reading camera may stop moving the lens, that is, stop searching for a rough position of the lens.
- the first threshold is a preset ratio value, and the ratio value may be 20%, 40% or other values, which is not limited in this embodiment of the present application.
- the second threshold may be a predetermined value not greater than the minimum value of the code reading score when there is a two-dimensional code or barcode in the field of view of the code reading camera.
- the second threshold may be 0, 1, 2 and other values, This embodiment of the present application does not limit it.
- the code-reading camera can continue to drive the lens to move to obtain the image clarity score and code-reading score at the next position, and repeat the above judgment process.
- In this implementation, the lens may not need to move through all positions along the specified search direction before the code-reading camera finds a rough position. In this way, the search time can be reduced and the search efficiency can be improved. A minimal sketch of this coarse search loop is given below.
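The following is a minimal sketch of the coarse-search loop under the stated stopping rule. `move_lens_to` and `capture_image` are assumed hardware hooks, the scoring functions are the illustrative ones sketched earlier, and the exact threshold handling in the patent may differ.

```python
def coarse_search(positions, move_lens_to, capture_image,
                  drop_ratio_threshold, code_score_threshold):
    """Move the lens step by step and stop once the clarity score has
    peaked and then dropped by at least `drop_ratio_threshold`, provided a
    code was readable at the peak position (code-reading score above
    `code_score_threshold`). Returns the rough position, or None if the
    travel is exhausted without finding one.
    """
    clarity_history, code_history = [], []
    for position in positions:  # positions listed in search order
        move_lens_to(position)
        image = capture_image()
        clarity_history.append(image_clarity_score(image))
        code_history.append(code_reading_score(image))

        best = max(clarity_history)
        best_index = clarity_history.index(best)
        if best_index == len(clarity_history) - 1 or best <= 0:
            continue  # clarity is still rising (or degenerate); keep moving
        drop_ratio = (best - clarity_history[-1]) / best
        if drop_ratio >= drop_ratio_threshold:
            # First threshold reached: accept the peak only if a code was
            # actually readable there (second threshold).
            if code_history[best_index] > code_score_threshold:
                return positions[best_index]
    return None
```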
- In another possible implementation, after the lens has moved from the starting point of the stroke to the end point (or from the end point to the starting point), the code-reading camera can determine at least one rough position of the lens according to the image clarity scores acquired at all positions passed during the lens movement.
- Specifically, a relationship curve between the multiple positions passed during the movement and the image clarity scores at those positions can be drawn; at least one peak value of the image clarity score is obtained from the curve; from the at least one position corresponding to the at least one image clarity score peak, a second position whose corresponding code-reading score is greater than the second threshold is determined; and the determined second position is used as at least one rough position of the lens.
- That is, the code-reading camera can draw the relationship curve between the multiple positions passed by the lens as it moves from the search starting point to the search end point and the image clarity scores at those positions.
- the code reading camera obtains the peak value of the image clarity score at each peak point in the relationship curve, so as to obtain at least one peak value of the image clarity score.
- the peak value of the image clarity score may be determined from the relationship curve based on any peak-finding algorithm, which is not limited in this application.
- The code-reading camera then judges whether the code-reading score at the position corresponding to each image clarity score peak is greater than the second threshold, and takes each position whose corresponding code-reading score is greater than the second threshold as a second position. There may be one or more determined second positions, and the code-reading camera uses the determined second positions as the at least one rough position.
- Fig. 4 is a graph showing a relationship between multiple positions and image clarity scores at the multiple positions according to an embodiment of the present application.
- From the relationship curve shown in Fig. 4, the code-reading camera can obtain the peak value at each peak point, thereby obtaining three image sharpness score peaks S1, S2, and S3.
- The code-reading camera then judges whether the code-reading score at the position corresponding to each peak is greater than the second threshold. Assuming the second threshold is 0, the code-reading scores at position A (corresponding to S1) and position C (corresponding to S3) are both greater than 0, while the code-reading score at position B (corresponding to S2) is equal to 0; the code-reading camera therefore takes position A and position C as the rough positions of the lens. A sketch of this peak-filtering step is given below.
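A minimal sketch of this peak-filtering variant, assuming the clarity and code-reading scores have already been collected over the whole travel; a simple strict local-maximum test stands in for "any peak-finding algorithm" mentioned above.

```python
def rough_positions_from_peaks(positions, clarity_scores, code_scores,
                               code_score_threshold):
    """Keep only the clarity-score peaks whose code-reading score exceeds
    the second threshold and return their positions as rough positions."""
    rough_positions = []
    for i in range(1, len(clarity_scores) - 1):
        is_peak = (clarity_scores[i] > clarity_scores[i - 1]
                   and clarity_scores[i] > clarity_scores[i + 1])
        if is_peak and code_scores[i] > code_score_threshold:
            rough_positions.append(positions[i])
    return rough_positions
```

With the Fig. 4 example and a threshold of 0, this would keep positions A and C (peaks S1 and S3) and drop position B (peak S2).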
- The code-reading camera can also obtain the maximum value of the image clarity score from the at least one image clarity score peak, use the position corresponding to that maximum value as the best position of the lens, and then focus according to the best position of the lens with reference to the implementation of the focusing step described below.
- Step 3022: Determine the optimal position of the lens according to the at least one rough position of the lens and the code-reading scores at the multiple positions.
- After obtaining at least one rough position of the lens, the code-reading camera conducts a fine search over the at least one rough position and other positions near it according to the obtained code-reading scores, so as to determine the best position of the lens.
- Specifically, the code-reading camera first determines the candidate search interval corresponding to each rough position, where a candidate search interval is a lens position interval centered on the corresponding rough position; from the code-reading scores corresponding to the positions included in each candidate search interval, the local maximum value of the code-reading score in that candidate search interval is determined; and the best position of the lens is determined according to the local maximum values of the code-reading scores in the candidate search intervals.
- this rough position is called the first rough position.
- The code-reading camera takes the first rough position as the center and extends a specified distance forward and backward along the specified search direction, so that a candidate search interval corresponding to the first rough position is formed, where the specified distance is greater than the specified search step size.
- For example, taking a rough position S as the center and extending forward by L and backward by L along the specified search direction forms the candidate search interval [S-L, S+L], where S is greater than L.
- After determining the candidate search interval corresponding to the first rough position, for each position to which the lens moves within the candidate search interval, the code-reading camera can obtain the code-reading score at that position, and determine the maximum value among the acquired code-reading scores; this maximum value is the local maximum value of the code-reading score in the candidate search interval. It should be noted that if there is one rough position, there is also one corresponding candidate search interval, and thus one local maximum value of the code-reading score is obtained. Although there is only one local maximum value of the code-reading score, there may be one or more positions corresponding to it.
- That is, the code-reading score may reach the local maximum value at a single position, or there may be multiple positions with the same code-reading score, all of which correspond to the local maximum value. Based on this, if there is one candidate search interval and there is one position corresponding to the local maximum value of the code-reading score in that interval, the code-reading camera can directly use that position as the best position of the lens.
- If there is one candidate search interval but there are multiple positions corresponding to the local maximum value of the code-reading score in that interval, the code-reading camera can select, from these multiple positions, the third position closest to the rough position corresponding to the candidate search interval, and use the third position as the best position of the lens.
- If there are multiple rough positions, there are also multiple corresponding candidate search intervals, and thus multiple local maximum values of the code-reading score are obtained.
- In this case, the maximum among the local maximum values of the code-reading scores is taken as the global maximum value. There may be a single position corresponding to the global maximum value, or there may be multiple positions.
- If there is a single position corresponding to the global maximum value, that position is directly used as the best position of the lens; if there are multiple positions corresponding to the global maximum value, then for each of these positions the distance to the rough position corresponding to the candidate search interval in which it is located is determined, and the position with the smallest such distance is taken as the best position of the lens.
- For example, suppose the positions corresponding to the global maximum value are a, b, and c, where a is located in candidate search interval U1 whose corresponding rough position is M, b is located in candidate search interval U2 whose corresponding rough position is N, and c is located in candidate search interval U3 whose corresponding rough position is Q. The code-reading camera calculates the distance between a and M to obtain the first distance, the distance between b and N to obtain the second distance, and the distance between c and Q to obtain the third distance. It then compares the first, second, and third distances; assuming the first distance is the smallest, position a is taken as the best position of the lens.
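A minimal sketch of this selection step, assuming each candidate search interval has already been scanned with the fine step and its per-position code-reading scores collected; the data layout (a list of dictionaries) is an illustrative assumption.

```python
def best_position_from_intervals(intervals):
    """`intervals` is a list of dicts, one per candidate search interval:
        {"rough": rough_position,
         "positions": [...],     # positions scanned with the fine step
         "code_scores": [...]}   # code-reading score at each position

    Implements: local maximum per interval, global maximum across the
    intervals, ties broken by distance to the interval's own rough position.
    """
    candidates = []  # (code_score, distance_to_rough, position)
    for interval in intervals:
        local_max = max(interval["code_scores"])
        for position, score in zip(interval["positions"],
                                   interval["code_scores"]):
            if score == local_max:
                candidates.append((score,
                                   abs(position - interval["rough"]),
                                   position))

    global_max = max(score for score, _, _ in candidates)
    tied = [c for c in candidates if c[0] == global_max]
    # A single position at the global maximum is used directly; otherwise
    # the one closest to its own interval's rough position wins.
    _, _, best_position = min(tied, key=lambda c: c[1])
    return best_position
```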
- In another possible implementation, the code-reading camera can acquire the image clarity score when the lens is at the search starting point, and subsequently, starting from the search starting point, each time the lens moves by the specified search step, the code-reading camera obtains the image clarity score at the position to which the lens has moved.
- the specified search step used here is referred to as the first search step.
- Each time an image clarity score is obtained at a newly reached position during the lens movement, the code-reading camera judges the trend of all the acquired image clarity scores as the lens moves. If, as the lens moves, the acquired image clarity scores first increase and then decrease, and the decrease ratio of the most recently acquired image clarity score relative to the maximum value among the acquired image clarity scores reaches the first threshold, the position corresponding to that maximum value is used as the rough position found by the search.
- Alternatively, the code-reading camera can, with reference to the methods described in the aforementioned steps 3021 and 3022, obtain the image clarity scores at all positions while the lens moves from the starting point of the stroke to the end point (or from the end point to the starting point), obtain at least one peak value of the image clarity score from them, and use the at least one position corresponding to the at least one peak value as the at least one rough position.
- The code-reading camera can then determine the candidate search interval corresponding to each rough position with reference to the method introduced in this step. After that, for the candidate search interval corresponding to each rough position, the code-reading camera can control the lens to start from one end point of the candidate search interval and move toward the other end point according to a second search step, where the second search step is smaller than the aforementioned first search step used when searching for the rough positions.
- Each time the lens moves within a candidate search interval, the code-reading camera can collect an image and calculate the code-reading score at the position to which the lens has moved according to the collected image. In this way, the code-reading camera obtains the code-reading scores at multiple positions of the lens within each candidate search interval.
- The code-reading camera can then determine the local maximum value of the code-reading score in each candidate search interval according to the code-reading scores at the multiple positions in that interval, and, with reference to the method introduced in this step, determine the best position of the lens according to the local maximum value of the code-reading score in each candidate search interval, which will not be repeated here.
- Step 303: Focus according to the best position of the lens.
- Specifically, the code-reading camera can calculate the distance difference between the current position of the lens and the best position of the lens to obtain the moving distance that the lens needs to move. At the same time, the code-reading camera can also determine whether the direction from the current position of the lens to the best position of the lens is the direction from the start of the stroke to the end of the stroke, or the direction from the end of the stroke to the start of the stroke, and use the determined direction as the subsequent moving direction of the lens.
- The code-reading camera then controls the motor to drive the lens to move by the moving distance along the moving direction to reach the best position of the lens, so as to complete focusing.
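A minimal sketch of this step, assuming lens positions are expressed as one-dimensional coordinates that grow from the stroke start toward the stroke end and that `drive_motor(direction, distance)` is an available hardware hook; both assumptions are illustrative.

```python
def focus_to(best_position, current_position, drive_motor):
    """Compute the moving distance and direction, then drive the lens to
    the best position to complete focusing."""
    distance = abs(best_position - current_position)
    direction = ("start_to_end" if best_position > current_position
                 else "end_to_start")
    drive_motor(direction, distance)
```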
- To sum up, in the embodiments of the present application, automatic focusing is performed according to the image clarity scores and code-reading scores obtained when the lens of the code-reading camera is at different positions. Since the code-reading score can represent the code-reading success rate of the code-reading camera, the embodiments of the present application evaluate the focusing effect of the lens at different positions by combining the image clarity and the code-reading effect of the code-reading camera, determine the best position of the lens accordingly, and then complete automatic focusing. In this way, not only is the clarity of the images collected by the code-reading camera after focusing ensured, but also the code-reading success rate of the code-reading camera after focusing.
- In addition, among the positions corresponding to the peak values of the image clarity score, the positions at which no code can be read (i.e., whose code-reading scores do not exceed the second threshold) can be filtered out by means of the code-reading scores, so as to obtain the at least one rough position.
- In this way, each obtained rough position is a position at which a two-dimensional code or barcode exists in the field of view, which avoids the influence of false image sharpness peaks and improves the focusing success rate.
- Fig. 5a is a schematic structural diagram of a focusing device 500 provided by an embodiment of the present application.
- the focusing device 500 may be implemented as part or all of a code-reading camera by software, hardware or a combination of the two.
- the apparatus 500 includes: an acquiring module 501 , a determining module 502 and a focusing module 503 .
- the obtaining module 501 is used to obtain the image clarity score and code reading score when the lens of the code reading camera is in multiple positions, and the code reading score is used to represent the code reading success rate of the code reading camera;
- a determination module 502, configured to determine the optimal position of the lens according to the image clarity scores and code-reading scores at the multiple positions;
- a focus module 503, configured to focus according to the best position of the lens.
- the determination module 502 includes:
- the first determination module 5021 is configured to determine at least one rough position of the lens from multiple positions according to the image sharpness scores at the multiple positions;
- the second determination module 5022 is used to determine the optimal position of the lens according to at least one rough position of the lens and the code reading scores at multiple positions;
- the acquiring module 501 is specifically configured to:
- the starting point of the search is the starting point or end point of the travel of the lens.
- the starting point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface of the object to be read is the largest, and the end point of the travel of the lens is the position point at which the vertical distance between the lens and the target surface is the smallest;
- when the search starting point is the starting point of the travel of the lens, the specified search direction refers to the direction from the starting point of the travel to the end point of the travel;
- when the search starting point is the end point of the travel of the lens, the specified search direction refers to the direction from the end point of the travel to the starting point of the travel.
- the first determining module 5021 is specifically configured to:
- if, as the lens moves, the acquired image clarity scores first increase and then decrease, and the decrease ratio of the most recently acquired image clarity score relative to the maximum value among the acquired image clarity scores reaches the first threshold, judge whether the code-reading score at the first position corresponding to the maximum value among the image clarity scores is greater than the second threshold;
- if it is greater than the second threshold, take the first position as the rough position of the lens.
- the first determining module 5021 is specifically configured to:
- the determined second position is used as at least one rough position of the lens.
- the second determining module 5022 is specifically configured to:
- the optimal position of the lens is determined.
- the second determination module 5022 is further specifically configured to:
- if there is one candidate search interval and there is one position corresponding to the local maximum value of the code-reading score in that candidate search interval, take the position corresponding to the local maximum value of the code-reading score in the candidate search interval as the best position of the lens;
- if there is one candidate search interval and there are multiple positions corresponding to the local maximum value of the code-reading score in that candidate search interval, select, from the multiple positions corresponding to the local maximum value of the code-reading score in the candidate search interval, the third position closest to the rough position corresponding to the candidate search interval, and take the third position as the best position of the lens;
- if there are multiple candidate search intervals, take the maximum value among the local maximum values of the code-reading scores in the multiple candidate search intervals as the global maximum value; if there is one position corresponding to the global maximum value, take that position as the optimal position of the lens; if there are multiple positions corresponding to the global maximum value, determine the distance between each of these positions and the rough position corresponding to the candidate search interval in which it is located, and take the position with the smallest such distance as the optimal position of the lens.
- the focusing module 503 is specifically configured to:
- determine the focusing moving direction and moving distance of the lens according to the optimal position of the lens and the current position of the lens;
- drive the lens to move to the best position of the lens to complete focusing.
- In the embodiments of the present application, automatic focusing is performed according to the image clarity scores and code-reading scores obtained when the lens of the code-reading camera is at different positions. Since the code-reading score can represent the code-reading success rate of the code-reading camera, the embodiments of the present application evaluate the focusing effect of the lens at different positions by combining the image clarity and the code-reading effect of the code-reading camera, determine the best position of the lens accordingly, and then complete automatic focusing. In this way, not only is the clarity of the images collected by the code-reading camera after focusing ensured, but also the code-reading success rate of the code-reading camera after focusing.
- It should be noted that, when the focusing device provided in the above embodiment performs focusing, the division into the above functional modules is used only as an example for illustration; the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
- In addition, the focusing device provided by the above embodiment and the focusing method embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.
- Fig. 6a is a schematic structural diagram of a computer device provided by an embodiment of the present application.
- the code-reading camera in the foregoing embodiments can be realized by the computer device.
- the computer device 600 includes: a processor 601 and a memory 602 .
- the processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like.
- The processor 601 may be implemented in at least one hardware form among DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array).
- The processor 601 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also known as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
- the processor 601 may be integrated with a GPU (Graphics Processing Unit, image processor), and the GPU is used for rendering and drawing the content that needs to be displayed on the display screen.
- the processor 601 may also include an AI (Artificial Intelligence, artificial intelligence) processor, where the AI processor is used to process computing operations related to machine learning. It should be noted that the control unit in the focusing device 200 shown in FIG. 2 may be realized by the processor 601 .
- Memory 602 may include one or more computer-readable storage media, which may be non-transitory.
- the memory 602 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices.
- The non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, and the at least one instruction is to be executed by the processor 601 to implement the focusing method provided by the method embodiments of this application.
- the computer device 600 may optionally further include: a peripheral device interface 603 and at least one peripheral device.
- the processor 601, the memory 602, and the peripheral device interface 603 may be connected through buses or signal lines.
- Each peripheral device can be connected to the peripheral device interface 603 through a bus, a signal line or a circuit board.
- the peripheral device includes: at least one of a radio frequency circuit 604 , a display screen 605 , a camera component 606 , an audio circuit 607 , a positioning component 608 and a power supply 609 .
- the peripheral device interface 603 may be used to connect at least one peripheral device related to I/O (Input/Output, input/output) to the processor 601 and the memory 602.
- In some embodiments, the processor 601, the memory 602 and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602 and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
- the radio frequency circuit 604 is used to receive and transmit RF (Radio Frequency, radio frequency) signals, also called electromagnetic signals.
- the radio frequency circuit 604 communicates with the communication network and other communication devices through electromagnetic signals.
- the radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals.
- the radio frequency circuit 604 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like.
- the radio frequency circuit 604 can communicate with other terminals through at least one wireless communication protocol.
- the wireless communication protocol includes but is not limited to: metropolitan area network, mobile communication networks of various generations (2G, 3G, 4G and 5G), wireless local area network and/or WiFi (Wireless Fidelity, wireless fidelity) network.
- the radio frequency circuit 604 may also include circuits related to NFC (Near Field Communication, short distance wireless communication), which is not limited in this application.
- the display screen 605 is used to display a UI (User Interface, user interface).
- the UI can include graphics, text, icons, video, and any combination thereof.
- the display screen 605 also has the ability to collect touch signals on or above the surface of the display screen 605 .
- the touch signal can be input to the processor 601 as a control signal for processing.
- the display screen 605 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
- In some embodiments, there may be one display screen 605, which is arranged on the front panel of the computer device 600; in other embodiments, there may be at least two display screens 605, which are respectively arranged on different surfaces of the computer device 600 or adopt a folding design.
- In still other embodiments, the display screen 605 may be a flexible display screen arranged on a curved or folded surface of the computer device 600. The display screen 605 may even be set as a non-rectangular, irregular shape, that is, a special-shaped screen.
- the display screen 605 can be made of LCD (Liquid Crystal Display, liquid crystal display), OLED (Organic Light-Emitting Diode, organic light-emitting diode) and other materials.
- the camera assembly 606 is used to capture images or videos.
- the camera assembly 606 may include the aforementioned focusing device 200 shown in FIG. 2 .
- the camera assembly may also include an image sensor, a filter assembly, a supplementary light device, and the like.
- the image sensor is used to generate and output an image signal through exposure.
- the filter assembly is used to filter the visible light entering the camera assembly in a specific wavelength band.
- the supplementary light device is used for supplementary light during the exposure process of the image sensor.
- Audio circuitry 607 may include a microphone and speakers.
- the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to realize voice communication.
- the microphone can also be an array microphone or an omnidirectional collection microphone.
- the speaker is used to convert the electrical signal from the processor 601 or the radio frequency circuit 604 into sound waves.
- the loudspeaker can be a conventional membrane loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, it is possible not only to convert electrical signals into sound waves audible to humans, but also to convert electrical signals into sound waves inaudible to humans for purposes such as distance measurement.
- the positioning component 608 is used to locate the current geographic location of the computer device 600, so as to realize navigation or LBS (Location Based Service, location-based service).
- The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
- the power supply 609 is used to supply power to various components in the computer device 600 .
- the power source 609 can be alternating current, direct current, disposable batteries or rechargeable batteries.
- the rechargeable battery may support wired charging or wireless charging.
- the rechargeable battery can also be used to support fast charging technology.
- The structure shown in Fig. 6b does not constitute a limitation on the computer device 600, which may include more or fewer components than shown in the figure, combine certain components, or adopt a different component arrangement.
- the embodiment of the present application also provides a non-transitory computer-readable storage medium.
- When the instructions in the storage medium are executed by a processor of a computer device, the computer device is enabled to execute the focusing method provided in the above embodiments.
- the computer readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like.
- the computer-readable storage medium mentioned in the embodiment of the present application may be a non-volatile storage medium, in other words, may be a non-transitory storage medium.
- The embodiment of the present application also provides a computer program product including instructions which, when run on a computer device, cause the computer device to execute the focusing method provided in the foregoing embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Toxicology (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Biomedical Technology (AREA)
- Automatic Focus Adjustment (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
Claims (12)
- A focusing method, characterized in that the method comprises: obtaining image clarity scores and code-reading scores when the lens of a code-reading camera is at multiple positions, the code-reading score being used to characterize the code-reading success rate of the code-reading camera; determining the optimal position of the lens according to the image clarity scores and code-reading scores at the multiple positions; and focusing according to the optimal position of the lens.
- The method according to claim 1, characterized in that determining the optimal position of the lens according to the image clarity scores and code-reading scores at the multiple positions comprises: determining at least one coarse position of the lens from the multiple positions according to the image clarity scores at the multiple positions; determining the optimal position of the lens according to the at least one coarse position of the lens and the code-reading scores at the multiple positions; and focusing according to the optimal position of the lens.
- The method according to claim 2, characterized in that obtaining the image clarity scores and code-reading scores when the lens of the code-reading camera is at multiple positions comprises: obtaining the image clarity score and code-reading score of the lens at a search start point, the search start point being the travel start point or travel end point of the lens, the travel start point of the lens being the position at which the vertical distance between the lens and the target surface of the object bearing the code to be read is largest, and the travel end point of the lens being the position at which the vertical distance between the lens and the target surface is smallest; controlling the lens to move from the search start point along a specified search direction by a specified search step, wherein when the search start point is the travel start point of the lens, the specified search direction is the direction from the travel start point to the travel end point of the lens, and when the search start point is the travel end point of the lens, the specified search direction is the direction from the travel end point to the travel start point of the lens; and each time the lens has moved by the specified search step, obtaining the image clarity score and code-reading score after the movement.
- The method according to claim 3, characterized in that determining at least one coarse position of the lens from the multiple positions according to the image clarity scores at the multiple positions comprises: during the movement of the lens along the specified search direction, if the image clarity scores obtained so far first increase and then decrease as the lens moves, and the proportion by which the most recently obtained image clarity score has dropped relative to the maximum of the obtained image clarity scores reaches a first threshold, judging whether the code-reading score at the first position corresponding to the maximum of the obtained image clarity scores is greater than a second threshold; and if the code-reading score at the first position is greater than the second threshold, taking the first position as a coarse position of the lens.
- The method according to claim 2, characterized in that determining at least one coarse position of the lens from the multiple positions according to the image clarity scores at the multiple positions comprises: plotting a curve of the relationship between the multiple positions and the image clarity scores at the multiple positions; obtaining at least one image clarity score peak from the curve; determining, from the at least one position corresponding to the at least one image clarity score peak, a second position whose code-reading score is greater than a second threshold; and taking the determined second position as at least one coarse position of the lens.
- The method according to claim 2, characterized in that determining the optimal position of the lens according to the at least one coarse position of the lens and the code-reading scores at the multiple positions comprises: determining a candidate search interval corresponding to each coarse position, the candidate search interval being a lens position interval centered on the corresponding coarse position; determining the local maximum of the code-reading scores within each candidate search interval from the code-reading scores corresponding to the positions contained in that interval; and determining the optimal position of the lens according to the local maximum of the code-reading scores within each candidate search interval.
- The method according to claim 6, characterized in that determining the optimal position of the lens according to the local maximum of the code-reading scores within each candidate search interval comprises: if there is one candidate search interval and one position corresponding to the local maximum of the code-reading scores within that interval, taking that position as the optimal position of the lens; if there is one candidate search interval and multiple positions corresponding to the local maximum of the code-reading scores within that interval, selecting, from those positions, a third position closest to the coarse position corresponding to that interval and taking the third position as the optimal position of the lens; and if there are multiple candidate search intervals, taking the maximum among the local maxima of the code-reading scores within the multiple candidate search intervals as the global maximum, and if there is one position corresponding to the global maximum, taking that position as the optimal position of the lens, while if there are multiple positions corresponding to the global maximum, determining the distance between each of those positions and the coarse position corresponding to the candidate search interval in which it is located, and taking the position with the smallest such distance as the optimal position of the lens.
- The method according to claim 1, characterized in that focusing according to the optimal position of the lens comprises: determining the focus moving direction and moving distance of the lens according to the optimal position of the lens and the current position of the lens; and driving the lens to move to the optimal position of the lens according to the focus moving direction and moving distance, so as to complete focusing.
- A focusing device, characterized in that the device comprises: an obtaining module configured to obtain image clarity scores and code-reading scores when the lens of a code-reading camera is at multiple positions, the code-reading score being used to characterize the code-reading success rate of the code-reading camera; a determining module configured to determine the optimal position of the lens according to the image clarity scores and code-reading scores at the multiple positions; and a focusing module configured to focus according to the optimal position of the lens.
- The device according to claim 9, characterized in that the determining module comprises: a first determining module configured to determine at least one coarse position of the lens from the multiple positions according to the image clarity scores at the multiple positions; and a second determining module configured to determine the optimal position of the lens according to the at least one coarse position of the lens and the code-reading scores at the multiple positions; the obtaining module is mainly configured to: obtain the image clarity score and code-reading score of the lens at a search start point, the search start point being the travel start point or travel end point of the lens, the travel start point of the lens being the position at which the vertical distance between the lens and the target surface of the object bearing the code to be read is largest, and the travel end point of the lens being the position at which the vertical distance between the lens and the target surface is smallest; control the lens to move from the search start point along a specified search direction by a specified search step, wherein when the search start point is the travel start point of the lens, the specified search direction is the direction from the travel start point to the travel end point of the lens, and when the search start point is the travel end point of the lens, the specified search direction is the direction from the travel end point to the travel start point of the lens; and each time the lens has moved by the specified search step, obtain the image clarity score and code-reading score after the movement; the first determining module is mainly configured to: during the movement of the lens along the specified search direction, if the image clarity scores obtained so far first increase and then decrease as the lens moves, and the proportion by which the most recently obtained image clarity score has dropped relative to the maximum of the obtained image clarity scores reaches a first threshold, judge whether the code-reading score at the first position corresponding to the maximum of the obtained image clarity scores is greater than a second threshold, and if the code-reading score at the first position is greater than the second threshold, take the first position as a coarse position of the lens; or, the first determining module is mainly configured to: plot a curve of the relationship between the multiple positions and the image clarity scores at the multiple positions; obtain at least one image clarity score peak from the curve; determine, from the at least one position corresponding to the at least one image clarity score peak, a second position whose code-reading score is greater than the second threshold; and take the determined second position as at least one coarse position of the lens; the second determining module is mainly configured to: determine a candidate search interval corresponding to each coarse position, the candidate search interval being a lens position interval centered on the corresponding coarse position; determine the local maximum of the code-reading scores within each candidate search interval from the code-reading scores corresponding to the positions contained in that interval; and determine the optimal position of the lens according to the determined local maxima of the code-reading scores; the second determining module is further mainly configured to: if there is one candidate search interval and one position corresponding to the local maximum of the code-reading scores within that interval, take that position as the optimal position of the lens; if there is one candidate search interval and multiple positions corresponding to the local maximum of the code-reading scores within that interval, select, from those positions, a third position closest to the coarse position corresponding to that interval and take the third position as the optimal position of the lens; and if there are multiple candidate search intervals, take the maximum among the local maxima of the code-reading scores within the multiple candidate search intervals as the global maximum, and if there is one position corresponding to the global maximum, take that position as the optimal position of the lens, while if there are multiple positions corresponding to the global maximum, determine the distance between each of those positions and the coarse position corresponding to the candidate search interval in which it is located, and take the position with the smallest such distance as the optimal position of the lens; the focusing module is mainly configured to: determine the focus moving direction and moving distance of the lens according to the optimal position of the lens and the current position of the lens; and drive the lens to move to the optimal position of the lens according to the focus moving direction and moving distance, so as to complete focusing.
- A focusing device, characterized in that the focusing device comprises a control unit, a motor, a lens and a moving mechanism, wherein the moving mechanism drives the lens to move in a direction perpendicular to the lens surface; the control unit is connected to the motor, and the control unit is configured to execute the method according to any one of claims 1-8, so as to control the motor to drive the moving mechanism to move the lens and complete focusing.
- A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method according to any one of claims 1-8 are implemented.
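- The coarse-search step recited in claims 3 and 4 (sweep the lens in fixed steps, wait for the clarity scores to rise and then drop from their peak by at least a first threshold, then keep the clarity peak as a coarse position only if its code-reading score exceeds a second threshold) can be illustrated with the short sketch below. The concrete threshold values and the camera interface are assumptions, not figures taken from the patent.

```python
DROP_RATIO = 0.2       # stand-in for the "first threshold" (drop proportion from the peak)
READ_THRESHOLD = 0.5   # stand-in for the "second threshold" (minimum code-reading score)

def coarse_search(camera, start, end, step):
    """Sweep the lens from the search start point and collect coarse positions."""
    direction = 1 if end >= start else -1
    clarity_history = []          # (position, clarity score) pairs in sweep order
    coarse_positions = []
    pos = start
    while (pos - end) * direction <= 0:
        clarity_history.append((pos, camera.clarity_score(pos)))
        peak_pos, peak = max(clarity_history, key=lambda pc: pc[1])
        latest = clarity_history[-1][1]
        # "Rise then fall": the peak lies above the first sample and the latest sample is below it.
        rose_then_fell = peak > clarity_history[0][1] and latest < peak
        if rose_then_fell and peak > 0 and (peak - latest) / peak >= DROP_RATIO:
            if camera.read_score(peak_pos) > READ_THRESHOLD:
                coarse_positions.append(peak_pos)
            clarity_history = []  # restart the trend detection for the next peak
        pos += step * direction
    return coarse_positions
```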
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020237044950A KR20240013880A (ko) | 2021-07-08 | 2022-07-07 | 포커싱 방법, 디바이스 및 저장 매체 |
JP2023580583A JP2024525462A (ja) | 2021-07-08 | 2022-07-07 | 合焦方法、装置及び記憶媒体 |
EP22837030.0A EP4369247A1 (en) | 2021-07-08 | 2022-07-07 | Focusing method and apparatus, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110771982.4A CN113364986B (zh) | 2021-07-08 | 2021-07-08 | 对焦方法、装置及存储介质 |
CN202110771982.4 | 2021-07-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023280285A1 true WO2023280285A1 (zh) | 2023-01-12 |
Family
ID=77538921
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/104462 WO2023280285A1 (zh) | 2021-07-08 | 2022-07-07 | 对焦方法、装置及存储介质 |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4369247A1 (zh) |
JP (1) | JP2024525462A (zh) |
KR (1) | KR20240013880A (zh) |
CN (1) | CN113364986B (zh) |
WO (1) | WO2023280285A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320749A (zh) * | 2023-05-23 | 2023-06-23 | 无锡前诺德半导体有限公司 | 摄像头的控制方法、摄像系统、移动终端和存储介质 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113364986B (zh) * | 2021-07-08 | 2022-08-09 | 杭州海康机器人技术有限公司 | 对焦方法、装置及存储介质 |
CN114169479A (zh) * | 2021-12-31 | 2022-03-11 | 深圳市前海研祥亚太电子装备技术有限公司 | 一种智能读码器 |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202838362U (zh) * | 2012-08-07 | 2013-03-27 | 重庆春涵科技发展有限公司 | 二维码识别系统 |
CN105578029A (zh) * | 2015-09-01 | 2016-05-11 | 闽南师范大学 | 一种多尺度变步长的自动对焦搜索算法据传输装置和方法 |
CN106056027A (zh) * | 2016-05-25 | 2016-10-26 | 努比亚技术有限公司 | 一种实现远距离扫描二维码的终端、系统和方法 |
CN107358234A (zh) * | 2017-07-17 | 2017-11-17 | 上海青橙实业有限公司 | 识别码的识别方法及装置 |
CN109214225A (zh) * | 2018-07-04 | 2019-01-15 | 青岛海信移动通信技术股份有限公司 | 一种图形条码的扫描方法、装置、移动终端和存储介质 |
CN109670362A (zh) * | 2017-10-16 | 2019-04-23 | 上海商米科技有限公司 | 条码扫描方法和装置 |
US20200034591A1 (en) * | 2018-07-24 | 2020-01-30 | Cognex Corporation | System and method for auto-focusing a vision system camera on barcodes |
CN111432125A (zh) * | 2020-03-31 | 2020-07-17 | 合肥英睿系统技术有限公司 | 一种对焦方法、装置及电子设备和存储介质 |
CN113364986A (zh) * | 2021-07-08 | 2021-09-07 | 杭州海康机器人技术有限公司 | 对焦方法、装置及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7769219B2 (en) * | 2006-12-11 | 2010-08-03 | Cytyc Corporation | Method for assessing image focus quality |
US8636212B2 (en) * | 2011-08-24 | 2014-01-28 | Metrologic Instruments, Inc. | Decodable indicia reading terminal with indicia analysis functionality |
JP6326759B2 (ja) * | 2012-11-30 | 2018-05-23 | 株式会社リコー | 画像記録システム、画像書き換えシステム及び画像記録方法 |
CN212552294U (zh) * | 2020-07-03 | 2021-02-19 | 孙杰 | 一种两用式激光直接标识工作站 |
CN112001200A (zh) * | 2020-09-01 | 2020-11-27 | 杭州海康威视数字技术股份有限公司 | 识别码识别方法、装置、设备、存储介质和系统 |
- 2021
  - 2021-07-08 CN CN202110771982.4A patent/CN113364986B/zh active Active
- 2022
  - 2022-07-07 KR KR1020237044950A patent/KR20240013880A/ko unknown
  - 2022-07-07 WO PCT/CN2022/104462 patent/WO2023280285A1/zh active Application Filing
  - 2022-07-07 JP JP2023580583A patent/JP2024525462A/ja active Pending
  - 2022-07-07 EP EP22837030.0A patent/EP4369247A1/en active Pending
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320749A (zh) * | 2023-05-23 | 2023-06-23 | 无锡前诺德半导体有限公司 | 摄像头的控制方法、摄像系统、移动终端和存储介质 |
CN116320749B (zh) * | 2023-05-23 | 2023-08-11 | 无锡前诺德半导体有限公司 | 摄像头的控制方法、摄像系统、移动终端和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113364986A (zh) | 2021-09-07 |
JP2024525462A (ja) | 2024-07-12 |
EP4369247A1 (en) | 2024-05-15 |
CN113364986B (zh) | 2022-08-09 |
KR20240013880A (ko) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023280285A1 (zh) | 对焦方法、装置及存储介质 | |
KR102324921B1 (ko) | 매크로 이미징 방법 및 단말기 | |
EP3828769B1 (en) | Image processing method and apparatus, terminal and computer-readable storage medium | |
US10877353B2 (en) | Continuous autofocus mechanisms for image capturing devices | |
US10701260B2 (en) | Focusing method, apparatus, computer readable storage medium and terminal | |
KR20190014638A (ko) | 전자 기기 및 전자 기기의 제어 방법 | |
CN103871051A (zh) | 图像处理方法、装置和电子设备 | |
CN109254564A (zh) | 物品搬运方法、装置、终端及计算机可读存储介质 | |
KR20140140855A (ko) | 촬영 장치의 자동 초점 조절 방법 및 장치 | |
CN107613208B (zh) | 一种对焦区域的调节方法及终端、计算机存储介质 | |
CN105323491B (zh) | 图像拍摄方法及装置 | |
CN102595044A (zh) | 具有防手震照相功能的电子装置及其照相方法 | |
US10250795B2 (en) | Identifying a focus point in a scene utilizing a plurality of cameras | |
WO2005032371A1 (ja) | 目画像撮像装置 | |
CN113012211B (zh) | 图像采集方法、装置、系统、计算机设备及存储介质 | |
US20170187950A1 (en) | Imaging device and focus control method | |
CN206962934U (zh) | 用于终端设备的拍摄组件 | |
CN110891132B (zh) | 包括折射率层的设备、相机模块和图像传感器封装件 | |
CN110602381B (zh) | 景深检测方法、装置、存储介质及终端 | |
KR20150133597A (ko) | 이동 단말기 및 이의 제어 방법 | |
US20190056574A1 (en) | Camera, and image display apparatus including the same | |
CN114299997A (zh) | 音频数据处理方法、装置、电子设备、存储介质及产品 | |
JP2009251679A (ja) | 撮像装置及び画像認識方法 | |
CN117812235A (zh) | 投影仪的焦距调整方法、装置、设备及存储介质 | |
WO2020073663A1 (zh) | 一种应用于终端设备的对焦方法、装置和终端设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | ENP | Entry into the national phase | Ref document number: 20237044950; Country of ref document: KR; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 1020237044950; Country of ref document: KR |
 | ENP | Entry into the national phase | Ref document number: 2023580583; Country of ref document: JP; Kind code of ref document: A |
 | WWE | Wipo information: entry into national phase | Ref document number: 2022837030; Country of ref document: EP |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | WWE | Wipo information: entry into national phase | Ref document number: 11202400115R; Country of ref document: SG |