CN117082344A - Image shooting method and device - Google Patents

Image shooting method and device

Info

Publication number
CN117082344A
CN117082344A CN202311016196.9A
Authority
CN
China
Prior art keywords
image
scaling
target lens
area
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311016196.9A
Other languages
Chinese (zh)
Inventor
钱烽
罗涛
陈琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Ant Blockchain Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ant Blockchain Technology Shanghai Co Ltd filed Critical Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202311016196.9A priority Critical patent/CN117082344A/en
Publication of CN117082344A publication Critical patent/CN117082344A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/958Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An image photographing method and apparatus, the method being performed by a terminal device configured with a target lens. The method includes: acquiring a first scaling, the first scaling being a first ratio of a first image size of a first object to a second image size of the first object, the second image size being the image size of an image of the first object captured with the target lens at a first shooting distance, the first shooting distance being within the depth of field of the target lens; photographing the first object with the target lens to obtain a captured image; and scaling and displaying the captured image based on the first scaling.

Description

Image shooting method and device
Technical Field
The embodiment of the specification belongs to the technical field of image processing, and particularly relates to an image shooting method and device.
Background
In some technical scenarios, a user is required to photograph a selected object using a camera configured on the terminal device the user holds, so as to obtain a corresponding captured image that supports carrying out a predetermined transaction. For example, in a technical scenario involving anti-counterfeiting verification of a product, the user is required to photograph a specific mark on the outer package of the product to be verified; the server side then performs recognition processing on the image of the specific mark, so that the authenticity of the corresponding product is judged based on that image.
A new solution is desired to facilitate guiding a user to take high quality images of a selected subject.
Disclosure of Invention
The invention aims to provide an image shooting method and device.
In a first aspect, there is provided an image capturing method performed by a terminal device configured with a target lens, the method comprising: acquiring a first scaling, wherein the first scaling is a first ratio of a first image size of a first object to a second image size of the first object, the second image size being the image size of an image of the first object captured with the target lens at a first shooting distance, and the first shooting distance being within the depth of field of the target lens; photographing the first object with the target lens to obtain a captured image; and scaling and displaying the captured image based on the first scaling.
In a second aspect, there is provided an image capturing apparatus disposed in a terminal device configured with a target lens, the apparatus comprising: a scale determination unit configured to acquire a first scaling, wherein the first scaling is a first ratio of a desired first image size of a first object to a second image size, the second image size being the image size of an image of the first object captured with the target lens at a first shooting distance, the first shooting distance being within the depth-of-field range of the target lens; an image acquisition unit configured to obtain a captured image by photographing the first object with the target lens; and a zoom display unit configured to scale and display the captured image based on the first scaling.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program/instruction which, when executed in a terminal device, performs the method described in the first aspect.
In a fourth aspect, there is provided a terminal device comprising a target lens, a memory, and a processor, the memory storing executable code/instructions, the processor implementing the method described in the first aspect when executing the executable code/instructions.
In the technical solution provided in the embodiments of the present disclosure, when a target lens in a terminal device needs to be used to photograph a first object, a first scaling is dynamically configured based on a desired first image size of the first object and a second image size of an image of the first object captured by the target lens at a first shooting distance, where the first shooting distance is within the depth of field of the target lens; after the first object is photographed through the target lens to obtain a corresponding captured image, the captured image is scaled and displayed according to the dynamically configured first scaling. This better guides the user to adjust the pose of the target lens according to the display effect of the image of the first object, so that the target lens photographs the first object near the first shooting distance and a clearer, better-quality image of the first object is captured.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present disclosure, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image capturing method provided in an embodiment of the present specification;
FIG. 2 is a schematic diagram of an exemplary process for obtaining a first scaling;
FIG. 3 is a first schematic diagram of an exemplary relationship between the field of view of a target lens and a captured image;
FIG. 4 is a second schematic diagram of an exemplary relationship between the field of view of a target lens and a captured image;
FIG. 5 is a schematic diagram illustrating an exemplary process for scaling a captured image;
FIG. 6 is a schematic diagram of an exemplary provided relationship of a captured image to a preview image;
FIG. 7 is a schematic diagram of an exemplary provided preview image versus preview interface;
FIG. 8 is a schematic diagram of an exemplary provided image selection for implementing a predetermined transaction and assisting in adjusting a target lens pose;
Fig. 9 is a schematic structural diagram of an image capturing device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the technical solution in the present specification better understood by those skilled in the art, the technical solution in the embodiments of the present specification will be clearly and completely described in the following with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
Cameras in terminal devices are typically configured with one or more lenses having fixed focal lengths. When a fixed-focal-length lens is used to photograph a selected object, the captured image includes an image of the selected object. The sharpness/presentation effect of the selected object's image in the captured image depends on the proportion of the frame that the image occupies, and that proportion is determined by the closest focusing distance and the focal length of the lens.
When a lens is used to photograph a selected object, if the object lies within the focus range/depth-of-field range of the lens, it can be presented clearly in the captured image; if it lies outside the focus range, its image may become blurred, i.e. the object cannot be presented clearly. In other words, the aforementioned focus range is the depth of field of the lens used to photograph the object, which comprises a front depth of field and a rear depth of field; the front depth of field corresponds to the closest focusing distance of the lens, and the rear depth of field corresponds to the farthest focusing distance. In general, the farthest focusing distance of a lens is relatively large, so when a lens is used to photograph a selected object at close range, as the lens approaches the object from afar, the closer the shooting distance gets to the closest focusing distance, the better the object is presented in the captured image, until the shooting distance falls below the closest focusing distance, after which the object can no longer be presented clearly.
In addition to the technical scenarios related to anti-counterfeiting verification of products in the foregoing examples, in other technical scenarios such as industrial flaw detection, commodity quality detection, artwork copyright registration, etc., a user may be required to adjust the pose of a certain target lens configured by a camera of the terminal device by adjusting the pose of the terminal device, so as to closely photograph a related selected object by using the target lens.
An embodiment of this specification provides an image shooting method and apparatus. When a first object (i.e. a selected object) needs to be photographed with a target lens in a terminal device, a first scaling (i.e. a digital zoom ratio) corresponding to the first object is dynamically configured based on a desired first image size of the first object and a second image size of an image of the first object photographed by the target lens at a first shooting distance, the first shooting distance being within the depth-of-field range of the target lens; after the first object is photographed through the target lens to obtain a corresponding captured image, the captured image is scaled and displayed according to the dynamically configured first scaling. This better guides the user to adjust the pose of the target lens according to the display effect of the image of the first object, so that the target lens photographs the first object near the first shooting distance and a clearer image of the first object is captured.
Fig. 1 is a flowchart of an image capturing method provided in an embodiment of the present disclosure. The method may be performed by a terminal device, which may be, for example, a portable mobile device such as a cell phone, tablet computer, smart watch, or smart glasses. The terminal device is provided with a camera, and the camera can be provided with one or more lenses.
Referring to fig. 1, the method may include, but is not limited to, the following steps S101 to S109.
In step S101, a first scaling is obtained, where the first scaling is a first ratio of a desired first image size of a first object to a second image size, the second image size being the image size of an image of the first object captured with the target lens at a first shooting distance, and the first shooting distance is within the depth of field of the target lens.
The target lens is one of the individual lenses of the terminal device configuration.
The execution of the method may depend on an application in the terminal device that is allowed to use the target lens, provided by the corresponding service provider and intended for implementing certain predetermined transactions (hereinafter denoted APP_A). When APP_A is initialized, it may obtain and store the lens models of the one or more lenses of the camera configured on the terminal device through a remote procedure call, a system call to the operating system of the terminal device, or other possible data-query means.
For the first object to be shot, the first object can be a selected object which needs to be shot in a short distance in technical scenes such as anti-counterfeiting verification, industrial flaw detection, commodity quality detection, artwork copyright registration and the like of the product. Taking the technical scenario of anti-counterfeiting verification of a product as an example, the type of the first object to be photographed can include, but is not limited to, various possible marks such as a two-dimensional code, a bar code, a laser label or a product logo, which are arranged on the outer package of the product.
The image size may be a pixel size or a physical size. Since the outline of the first object to be photographed may be generally approximately square or rectangular in appearance, the image size of a certain image may be represented by the number of pixels or the physical length of the image corresponding to the longer side or the shorter side. For example, for an image with a longer-side pixel count or physical length of l and a shorter-side pixel count or physical length of w, the image size may be represented by l or w.
Referring to fig. 2, in a possible implementation, obtaining the first scaling k1 corresponding to the first object may be implemented through some or all of the following steps S1011 to S1017.
In step S1011, lens selection information indicating a target lens to be used and object selection information indicating a first object to be photographed are acquired.
For example, APP_A may provide a corresponding graphical user interface to the user, through which the user may provide the lens selection information and the object selection information to the terminal device; the object selection information may include, for example but not limited to, the type to which the first object belongs, and the lens selection information may include, for example but not limited to, one of the lens models stored by APP_A.
In step S1013, a second ratio is obtained according to the lens selection information, where the second ratio is a ratio of an image size of the second object captured by the target lens at the first capturing distance to a physical size of the second object.
The second object may be the same or different from the first object.
The first photographing distance is not less than a closest focusing distance of the target lens and is not greater than a farthest focusing distance of the target lens. In the process of photographing a selected object using a target lens at a short distance, the first photographing distance may be generally set to a nearest focusing distance of the target lens or to some preset value approaching the nearest focusing distance.
The precision of the target lens when the first object/second object is photographed at the first shooting distance may be calibrated in advance; the precision refers to the size of the physical area that each pixel represents at the first shooting distance.
Referring to FIG. 3, the physical length of the longer side of the second object is denoted x_h1 and that of the shorter side x_w1; the physical length of the longer side of the field of view of the target lens at the first shooting distance is denoted y_h1 and that of the shorter side y_w1. When the second object is photographed at the first shooting distance with the target lens, the number of pixels on the longer side of the resulting image (hereinafter, the focused image) is denoted y_h2 and on the shorter side y_w2; the number of pixels on the longer side of the image/image area of the second object in the focused image is denoted x_h2 and on the shorter side x_w2.
When the target lens photographs the second object at the first shooting distance, the closer the center of the second object is to the center of the field of view, and the closer the center of the focused image is to the center of the image of the second object, the better the imaging effect of the second object; ideally, the center of the second object coincides with the center of the field of view, the center of the focused image coincides with the center of the image of the second object, and the second object obtains a better imaging effect. With a good imaging effect, the ratio of y_h1 to y_h2 is the precision of the target lens at the first shooting distance, and the second ratio P is the reciprocal of that precision, i.e. the ratio of y_h2 to y_h1. Moreover, the ratio of y_h1 to y_h2 equals the ratios of y_w1 to y_w2, of x_h1 to x_h2, and of x_w1 to x_w2; thus, to obtain the second ratio P through prior experimental measurement or standard-target calibration, it suffices in practice to measure any one of the four pairs y_h1 and y_h2, y_w1 and y_w2, x_h1 and x_h2, x_w1 and x_w2.
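As a sketch of this calibration (the function name and the sample values are illustrative, not taken from the patent), the second ratio P can be computed from any one measured pair of pixel count and physical length:

```python
def second_ratio(pixels: float, physical: float) -> float:
    """Second ratio P: pixel count divided by physical length at the
    first shooting distance (the reciprocal of the lens precision).
    Any of the four measured pairs works, e.g. (y_h2, y_h1): the pixel
    count of the focused image's longer side and the physical length
    of the field of view's longer side."""
    if physical <= 0:
        raise ValueError("physical length must be positive")
    return pixels / physical
```

For instance, a field of view whose longer side spans 80 mm imaged onto 4000 pixels gives P = 50 pixels/mm.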
Illustratively, as shown in the process of fig. 2, APP_A may obtain the lens model of the target lens according to the lens selection information, and query, locally or through a remote procedure call, the second ratio P stored in association with that lens model.
In step S1015, the physical size and the first image size of the first object are acquired according to the object selection information.
The desired first image size may be preset based on technical needs. For example, the physical length of the longer side of the first object is denoted x_h3 and that of the shorter side x_w3; the desired number of pixels on the longer side may be set to x_h4 and on the shorter side x_w4, where the ratio of x_h3 to x_h4 equals the ratio of x_w3 to x_w4.
For example, APP_A may obtain the type of the first object according to the object selection information, and then query, locally or through a remote procedure call, the physical size and the first image size of the first object stored in association with that type.
In step S1017, a first scaling is calculated based on the second ratio, the physical size of the first object, and the first image size.
The first scaling k1 may be x_h4/(x_h3*P) or x_w4/(x_w3*P). As can be understood with reference to FIG. 3 and the related description above, the second image size equals x_h3*P or x_w3*P.
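A minimal sketch of this computation (the function name is illustrative; the patent only gives the formula) follows:

```python
def first_scaling(x_h4: float, x_h3: float, p: float) -> float:
    """First scaling k1 = x_h4 / (x_h3 * P), where x_h3 * P is the
    second image size: the pixel size the first object's longer side
    would have when shot at the first shooting distance. The same
    formula applies to the shorter side with x_w4 and x_w3."""
    second_image_size = x_h3 * p  # in pixels
    return x_h4 / second_image_size
```

For example, a desired 1000-pixel side for an object whose side is 10 mm, at P = 50 pixels/mm, gives a second image size of 500 pixels and k1 = 2.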
In one possible implementation, at least one scaling of the target lens corresponding to at least one preset object is pre-stored in the terminal device, the at least one preset object including the first object. Accordingly, after determining that the selected object to be photographed is the first object, the first scaling k1 corresponding to the first object may be queried from the at least one scaling.
In the foregoing embodiments, the first object may be determined not only from the object selection information provided by the user; alternatively, after at least one captured image of the first object is acquired using the target lens, the captured image may be recognized by a corresponding image recognition algorithm to determine that the selected object to be photographed is the first object.
Obtaining the first scaling k1 as described in the foregoing embodiments is exemplary; it will be appreciated that the first scaling k1 may also be obtained in other ways. For example, referring to FIG. 3 and its related description, the number of pixels on the longer side of the first object, x_h3*P, or on the shorter side, x_w3*P, can be obtained through similar experimental measurement or standard-target calibration under conditions yielding a relatively good imaging effect. For another example, APP_A may also determine, in a specific manner, the target lens to be used at initialization; when the first object is a two-dimensional code, APP_A may obtain the physical size of the first object by extracting its code value and looking it up at the corresponding storage location; further, the pre-computed first scaling k1, stored in association with the lens model of the target lens and the physical size of the first object, can be queried from the corresponding storage location based on that lens model and physical size.
Returning to fig. 1, in step S103, the first object is photographed with the target lens to obtain a captured image.
The field of view of the target lens differs at different shooting distances. Referring to FIG. 4, when the first object is photographed by the target lens in step S103, the physical length of the longer side of the field of view is denoted y_h3 and that of the shorter side y_w3. The number of pixels on the longer side of the image/image area of the first object in the resulting captured image is denoted x_h5 and on the shorter side x_w5; the image size of the original captured image produced by the target lens is unchanged.
In an actual technical scene, not only a single captured image of the first object may be taken; a plurality of captured images of the first object may also be taken continuously in the form of video. For example, APP_A may call the camera of the terminal device so that it continuously captures a plurality of images of the first object in video form using the target lens and returns them correspondingly. It can be understood that for any i-th captured image Q_1i obtained by photographing the first object through step S103, the subsequent step S105 may be performed.
In step S105, the captured image is scaled and displayed based on the first scaling.
For any i-th captured image Q_1i, referring to fig. 6, scaling and displaying the i-th captured image Q_1i according to the first scaling k1 may be implemented through the following steps S1051 and S1053.
In step S1051, a preview image is acquired, wherein the image size of the preview image is the same as the image size of the captured image, and the preview image is obtained by scaling the first image area in the captured image according to the first scaling ratio.
That is, any i-th preview image Q_2i is obtained by digitally zooming the i-th captured image Q_1i according to the first scaling k1, where the first image area has the same center as the i-th captured image Q_1i. Referring to FIG. 6, a first image area having the same center as the captured image Q_1i may be cropped out of Q_1i and then enlarged by a factor of k1 to obtain the preview image Q_2i; alternatively, the captured image Q_1i may first be enlarged by a factor of k1, and the preview image Q_2i, having the same center as the enlarged Q_1i, may then be cropped from it.
When an image M_1i of the first object is present in the first image area of the captured image Q_1i, the number of pixels on the longer side of M_1i is denoted x_h5 and on the shorter side x_w5. Then an image M_2i of the first object will be present in the preview image Q_2i corresponding to Q_1i, where the number of pixels on the longer side of M_2i, denoted x_h6, and on the shorter side, denoted x_w6, satisfy x_h6 = k1*x_h5 and x_w6 = k1*x_w5.
In step S1053, the preview image is displayed through a preview interface, where the preview interface includes a first guide area marked by marking information; the first guide area is used for displaying a second image area of the preview image after the second image area is scaled according to a second scaling, and the image size of the second image area is identical to the first image size.
For the second image area in any i-th preview image Q_2i, it may have the same or a close center as the preview image Q_2i. As noted above, for the desired, ideal image of the first object, the number of pixels on its longer side is x_h4 and on its shorter side x_w4. Correspondingly, for the second image area in the preview image Q_2i, the number of pixels on the longer side is x_h4 and on the shorter side x_w4.
The number of pixels on the longer side of the first guide area is denoted x_h7 and on the shorter side x_w7. The ratio of x_h4 to x_w4 must equal the ratio of x_h7 to x_w7. In a possible embodiment, the size of the first guide area may be preset based on the screen size of the terminal device or on other criteria, while satisfying that the ratio of x_h4 to x_w4 equals the ratio of x_h7 to x_w7 and that the ratio of x_h7 to x_h4 (and of x_w7 to x_w4) is less than 1; in this case, the second scaling k2 equals the ratio of x_h7 to x_h4, which equals the ratio of x_w7 to x_w4. In another possible implementation, the second scaling k2 may be preset as a constant less than 1, and the size of the first guide area is configured based on k2.
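Under the second implementation above, where k2 is a preset constant less than 1, the guide-area sizing reduces to a scalar multiplication. A hedged sketch (function name and rounding policy are illustrative, not specified by the patent):

```python
def guide_area_size(x_h4: int, x_w4: int, k2: float) -> tuple[int, int]:
    """Pixel size (x_h7, x_w7) of the first guide area: both sides of
    the desired first image size are scaled by the constant k2 (< 1),
    which preserves the required ratio x_h4 : x_w4 up to rounding."""
    if not 0 < k2 < 1:
        raise ValueError("k2 must be a constant less than 1")
    return round(x_h4 * k2), round(x_w4 * k2)
```

For instance, a desired size of 1000 by 800 pixels with k2 = 0.5 yields a 500 by 400 pixel first guide area.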
Referring to fig. 7, for example, mark information may be provided on at least 3 vertices of the first guide area to indicate the position of the first guide area. The form of the marking information or the marking method is not limited in the embodiment of the present specification.
Referring to fig. 7, the user may observe the first guide area and, within it, the image M_2i of the first object in the preview image Q_2i. The smaller the average distance between the 4 vertices of M_2i and the 4 vertices of the first guide area, the higher the matching degree. The higher the matching degree of M_2i, the closer the number of pixels on its longer side is to x_h4 and the closer the number on its shorter side is to x_w4, and the better M_2i meets the user's expectation; at the same time, this indicates that the actual shooting distance of the captured image Q_1i obtained by photographing the first object with the target lens is closer to the first shooting distance, so that the first object is presented more clearly in the captured image Q_1i and the preview image Q_2i, and, in terms of the size of the image of the first object, the image M_2i in the preview image Q_2i better meets the user's expectation. Accordingly, by adjusting the pose of the terminal device, the user can adjust the matching degree between the first guide area and the observed image of the first object; in the course of this pose adjustment, the user is guided to bring the shooting distance of the target lens close to the first shooting distance, so that a clearer image of the first object is captured.
Referring to fig. 7, the preview interface may further include a second guide area in which the first guide area is located; the second guide area is used for displaying a third image area of the preview image after scaling by the second scaling k2, the second image area being located within the third image area. In a more specific example, the part of the third image area outside the second image area may form a hollow rectangular frame (the shape of the Chinese character "回"), and likewise the part of the second guide area outside the first guide area. In a more specific example, the third image area and the second image area have the same center, and the second guide area and the first guide area have the same center; in this case, the number of pixels on the longer side of the third image area is x_h4 + 2*l_x1 and on the shorter side x_w4 + 2*l_x1, and the number of pixels on the longer side of the second guide area is x_h7 + 2*l_x2 and on the shorter side x_w7 + 2*l_x2, with the ratio of l_x2 to l_x1 equal to the second scaling k2, which makes it easier for the user to observe the matching degree between the image M_2i and the first guide area.
When the preview interface includes the first guide area but not the second guide area, the rest of the preview image Q_2i other than the second image area may be masked in various possible ways and not presented in the preview interface. When the preview interface includes both the first guide area and the second guide area, the rest of the preview image Q_2i other than the third image area may be masked in various possible ways and not presented in the preview interface. By preventing the parts of the preview image Q_2i outside the second or third image area from visually interfering with the user, the user can more easily observe the matching degree between the image of the first object and the first guide area and adjust the pose of the mobile phone accordingly.
On the basis of the zoomed display of the captured image implemented through the foregoing steps S1051 and S1053, as shown in fig. 8, the terminal device may further select an image for executing a predetermined transaction and assist the user in adjusting the pose of the target lens through some or all of the following steps S107 to S113.
Step S107, a fourth image area corresponding to the first object is detected in the preview image.
A fourth image area corresponding to the first object can be detected in the preview image by various image detection models, including YOLO; for example, the approximately rectangular outline corresponding to the image M_2i is detected in the preview image Q_2i, yielding the fourth image area.
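As a sketch of this step, assume a YOLO-style detector whose output has already been flattened into (label, confidence, box) tuples — a hypothetical simplification of real detector APIs, not the patent's own interface:

```python
def pick_fourth_area(detections, target_label):
    """From a YOLO-style detector output -- assumed here to be a list
    of (label, confidence, (x, y, w, h)) tuples -- keep the boxes
    whose label matches the first object and return the
    highest-confidence one; its approximately rectangular outline
    serves as the fourth image area."""
    candidates = [d for d in detections if d[0] == target_label]
    if not candidates:
        return None  # first object not found in the preview image
    return max(candidates, key=lambda d: d[1])[2]
```

If the detector reports no box for the first object, the caller can fall back to the pose-prompt path rather than selecting an image for the predetermined transaction.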
Step S109, determining whether the relationship between the fourth image area and the second image area meets the preset condition.
The preset condition is used to judge whether the fourth image area matches the second image area, and accordingly to decide whether the fourth image area presents the first object sufficiently clearly, including but not limited to deciding whether the actual shooting distance corresponding to the captured image matches the first shooting distance, for example whether their difference is within a certain range. Suppose the longer and shorter sides of the first object are equal, i.e. the outline of the first object is square or approximately square, and the preset condition requires that the number of pixels on the shorter side of the fourth image area be not less than a1*x_w4, where a1 is a preset constant less than 1; if the number of pixels on the shorter side of the fourth image area is smaller than a1*x_w4, the fourth image area does not match the second image area, i.e. the relation between the fourth image area and the second image area does not meet the preset condition. The preset condition may of course also include other possible requirements, for example that the number of pixels on the longer side of the fourth image area be not greater than a2*x_w4, where a2 is a preset constant greater than 1.
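For a square first object, the two constraints just described reduce to a pair of comparisons; a1 = 0.8 and a2 = 1.2 below are assumed example constants, not values from the patent:

```python
def meets_preset_condition(fourth_short, fourth_long, x_w4, a1=0.8, a2=1.2):
    """Illustrative preset condition for a square first object:
    the shorter side of the detected fourth image area must be at
    least a1 * x_w4 pixels and the longer side at most a2 * x_w4
    pixels, where x_w4 is the shorter side of the second image area,
    a1 < 1 and a2 > 1."""
    return (fourth_short >= a1 * x_w4) and (fourth_long <= a2 * x_w4)
```

When the condition fails because the detected region is too small, the shooting distance is likely larger than the first shooting distance, which is exactly what the pose prompt of step S113 corrects.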
If the relation between the fourth image area and the second image area meets the corresponding preset condition, the following step S111 may be performed next; otherwise, the following step S113 may be performed next.
Step S111, the preview image is determined as an image for executing a predetermined transaction.
The predetermined transaction may be, for example, anti-counterfeit verification of the product, industrial defect detection, etc., as described above.
In some embodiments, the second image area or the third image area may also be determined as an image for performing a predetermined transaction if the relation between the fourth image area and the second image area meets a respective preset condition.
Step S113, generating pose prompt information according to the relation between the fourth image area and the second image area; the pose prompt information is displayed in the preview interface and is used for indicating a user to adjust the pose of the target lens.
For example, when the number of pixels on the longer side of the fourth image area is smaller than a first preset value and the number of pixels on its shorter side is smaller than a second preset value, the generated pose prompt information may prompt the user to move the terminal device closer to the first object. For another example, when the center of the fourth image area is offset in some direction relative to the center of the preview image by too large a distance, the generated pose prompt information may prompt the user to move the terminal device in the corresponding direction.
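Both prompt rules can be sketched in one function; the thresholds (400- and 300-pixel side minima, 15% center-offset tolerance) and the English prompt strings are illustrative assumptions:

```python
def pose_hint(x, y, w, h, preview_w, preview_h,
              min_long=400, min_short=300, max_off=0.15):
    """Generate a pose prompt from the fourth image area's bounding
    box (x, y, w, h) within a preview of size preview_w x preview_h.
    Returns a prompt string, or None when the pose is acceptable.
    All threshold values are assumed examples, not patent values."""
    long_side, short_side = max(w, h), min(w, h)
    # Rule 1: region too small on both sides -> shooting distance too large
    if long_side < min_long and short_side < min_short:
        return "move the phone closer to the object"
    # Rule 2: region center drifts too far from the preview center
    dx = (x + w / 2) - preview_w / 2
    dy = (y + h / 2) - preview_h / 2
    if abs(dx) > max_off * preview_w:
        return "move the phone " + ("right" if dx > 0 else "left")
    if abs(dy) > max_off * preview_h:
        return "move the phone " + ("down" if dy > 0 else "up")
    return None
```

A centered, sufficiently large box yields no prompt, at which point the preview image can be taken as the image for the predetermined transaction.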
Based on the same concept as the foregoing method embodiments, the present embodiments further provide an image capturing apparatus 900, deployed in a terminal device configured with a target lens. Referring to fig. 9, the apparatus 900 includes: a scale determination unit 901 configured to obtain a first scaling, where the first scaling is a first ratio of a desired first image size of a first object to a second image size, the second image size being the image size of an image of the first object captured with the target lens at a first shooting distance, the first shooting distance being within the depth-of-field range of the target lens; an image acquisition unit 903 configured to capture the first object with the target lens to obtain a captured image; and a zoom display unit 905 configured to display the captured image scaled based on the first scaling.
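The computation performed by the scale determination unit is spelled out in claim 10 and admits a one-line sketch; the figures in the test are illustrative units only (e.g., pixels per millimetre times millimetres):

```python
def first_scaling(second_ratio, physical_size, first_image_size):
    """First scaling as computed in claim 10: the second image size
    equals the first object's physical size times the second ratio
    (image size over physical size when shooting with the target
    lens at the first shooting distance); the first scaling is then
    the desired first image size divided by that second image size."""
    second_image_size = second_ratio * physical_size
    return first_image_size / second_image_size
```

Per claim 11, the terminal device may instead pre-store this scaling per (lens, preset object) pair and merely look it up at capture time.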
There is also provided in the embodiments of the present specification a computer-readable storage medium having stored thereon a computer program/instruction which, when executed in a terminal device, causes the terminal device to perform an image capturing method provided in any of the method embodiments described above.
The embodiment of the specification also provides a terminal device, which comprises a target lens, a memory and a processor, wherein the memory stores a computer program/instruction, and the processor realizes the image shooting method provided in any one of the method embodiments when executing the computer program/instruction.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements to method flows today can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays such programming is mostly implemented with "logic compiler" software rather than by manually fabricating integrated circuit chips; this software is similar to the software compilers used in program development, and the source code to be compiled must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by writing the method flow in one of the above hardware description languages with a little logic programming and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing the controller in purely computer-readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Or, indeed, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation device is a server system. Of course, the application does not exclude that as future computer technology advances, the computer implementing the functions of the above-described embodiments may be, for example, a personal computer, a laptop computer, a car-mounted human-computer interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Although one or more embodiments of the present specification provide the method operation steps as described in the embodiments or flowcharts, more or fewer operation steps may be included based on conventional or non-inventive means. The order of steps recited in the embodiments is only one of many possible execution orders and does not represent the only one. When an actual device or end product executes, the steps may be performed sequentially or in parallel according to the methods shown in the embodiments or figures (e.g., in a parallel-processor or multi-threaded environment, or even in a distributed data processing environment). The terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited element is not excluded. Words such as first and second, if used, denote names only and do not denote any particular order.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, when one or more of the present description is implemented, the functions of each module may be implemented in the same piece or pieces of software and/or hardware, or a module that implements the same function may be implemented by a plurality of sub-modules or a combination of sub-units, or the like. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage, graphene storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
One skilled in the relevant art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
One or more embodiments of the present specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present description may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the others. In particular, for the system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiments. In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
The foregoing is merely an example of one or more embodiments of the present specification and is not intended to limit the one or more embodiments of the present specification. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of the present specification, should be included in the scope of the claims.

Claims (16)

1. An image capturing method performed by a terminal device configured with a target lens, the method comprising:
acquiring a first scaling, wherein the first scaling is a first ratio of a first image size of a desired first object to a second image size, the second image size being an image size of an image of the first object taken with the target lens at a first shooting distance, the first shooting distance being within a depth of field of the target lens;
shooting the first object by using the target lens to obtain a shooting image;
and scaling and displaying the shooting image based on the first scaling scale.
2. The method of claim 1, the first shooting distance comprising a closest focus distance of the target lens.
3. The method of claim 1, the scaling displaying the captured image based on the first scaling, comprising:
acquiring a preview image, wherein the image size of the preview image is the same as the image size of the shooting image, and the preview image is obtained by scaling a first image area in the shooting image according to the first scaling scale;
and displaying the preview image through a preview interface, wherein the preview interface comprises a first guide area marked by marking information and used for displaying a second image area which is zoomed according to a second zoom scale and has the same image size as the first image size in the preview image.
4. A method according to claim 3, comprising a second guide area in the preview interface, the first guide area being located in the second guide area, the second guide area being for displaying a third image area of the preview image after scaling by the second scaling, the second image area being located in the third image area.
5. The method of claim 4, the remainder of the third image area other than the second image area being "回"-shaped (a rectangular annular frame); the remainder of the second guiding area other than the first guiding area being "回"-shaped.
6. The method of claim 5, the third image region and the second image region having a same center; the second guide region and the first guide region have the same center.
7. A method according to claim 3, the first image area and the captured image having the same center; the second image area and the preview image have the same center.
8. A method according to claim 3, the method further comprising:
detecting a fourth image area corresponding to the first object in the preview image;
determining whether the relation between the fourth image area and the second image area meets a preset condition or not;
if not, generating pose prompt information according to the relation between the fourth image area and the second image area; the pose prompt information is displayed in the preview interface and is used for indicating a user to adjust the pose of the target lens.
9. The method of claim 8, the method further comprising: and determining the preview image as an image for executing a predetermined transaction in the case that the relation between the fourth image area and the second image area meets a preset condition.
10. The method of claim 1, the obtaining a first scale comprising:
acquiring lens selection information and object selection information, wherein the lens selection information is used for indicating the target lens, and the object selection information is used for indicating the first object;
acquiring a second ratio according to the lens selection information, wherein the second ratio is the ratio of the image size of the second object shot by the target lens at the first shooting distance to the physical size of the second object; the method comprises the steps of,
acquiring the physical size of the first object and the first image size according to the object selection information;
the first scale is calculated based on the second ratio, the physical size of the first object, and the first image size.
11. The method according to claim 1, wherein at least one scaling of the target lens corresponding to at least one preset object is pre-stored in the terminal device, and the at least one preset object includes the first object;
wherein the obtaining the first scaling includes: and querying a first scaling corresponding to the first object from the at least one scaling.
12. The method of claim 1, the type of the first object comprising a two-dimensional code, a bar code, a laser label, or a product logo disposed on an outer packaging of the product.
13. The method of any of claims 3-12, the second scaling being a constant less than 1.
14. An image capturing apparatus deployed in a terminal device configured with a target lens, the apparatus comprising:
a scale determination unit configured to acquire a first scale, wherein the first scale is a first ratio of a first image size of a desired first object to a second image size, the second image size being an image size of an image of the first object photographed with the target lens at a first photographing distance, the first photographing distance being within a depth of field range of the target lens;
an image acquisition unit configured to acquire a captured image by capturing the first object with the target lens;
and a zoom display unit configured to perform zoom display on the photographed image based on the first zoom scale.
15. A computer readable storage medium having stored thereon a computer program which, when executed in a terminal device, performs the method of any of claims 1-13.
16. A terminal device comprising a target lens, a memory, and a processor, the memory having executable code stored therein, which when executed by the processor, implements the method of any of claims 1-13.
CN202311016196.9A 2023-08-11 2023-08-11 Image shooting method and device Pending CN117082344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311016196.9A CN117082344A (en) 2023-08-11 2023-08-11 Image shooting method and device

Publications (1)

Publication Number Publication Date
CN117082344A true CN117082344A (en) 2023-11-17

Family

ID=88701592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311016196.9A Pending CN117082344A (en) 2023-08-11 2023-08-11 Image shooting method and device

Country Status (1)

Country Link
CN (1) CN117082344A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5331419A (en) * 1991-03-26 1994-07-19 Kyocera Corporation Size display system for electronic camera
JPH11112966A (en) * 1997-10-07 1999-04-23 Canon Inc Moving object detector, moving object detection method and computer-readable storage medium thereof
JP2004212892A (en) * 2003-01-08 2004-07-29 Canon Inc Photographing device
US20050074185A1 (en) * 2003-10-07 2005-04-07 Jee-Young Jung Apparatus and method for controlling an auto-zooming operation of a mobile terminal
CN109495681A (en) * 2017-09-12 2019-03-19 天津三星通信技术研究有限公司 The method and apparatus for obtaining image
CN112969023A (en) * 2021-01-29 2021-06-15 北京骑胜科技有限公司 Image capturing method, apparatus, storage medium, and computer program product
CN114390213A (en) * 2020-10-22 2022-04-22 华为技术有限公司 Shooting method and equipment
CN114820296A (en) * 2021-01-27 2022-07-29 北京小米移动软件有限公司 Image processing method and device, electronic device and storage medium
CN114897980A (en) * 2022-05-07 2022-08-12 北京市商汤科技开发有限公司 Distance determination method and related device, equipment and storage medium
WO2023274097A1 (en) * 2021-06-28 2023-01-05 歌尔股份有限公司 Qr code image processing method and apparatus
US20230154148A1 (en) * 2021-11-18 2023-05-18 Canon Kabushiki Kaisha Image capturing apparatus capable of suppressing detection of subject not intended by user, control method for image capturing apparatus, and storage medium


Similar Documents

Publication Publication Date Title
US9998651B2 (en) Image processing apparatus and image processing method
TWI677252B (en) Vehicle damage image acquisition method, device, server and terminal device
KR102480245B1 (en) Automated generation of panning shots
US10511758B2 (en) Image capturing apparatus with autofocus and method of operating the same
CN109614983B (en) Training data generation method, device and system
US7606442B2 (en) Image processing method and apparatus
JP5054063B2 (en) Electronic camera, image processing apparatus, and image processing method
US20140211045A1 (en) Image processing apparatus and image pickup apparatus
KR102209066B1 (en) Method and apparatus for image composition using multiple focal length
EP3591573A1 (en) Method and apparatus for use in previewing during iris recognition process
CN107071273A (en) A kind of photographing instruction sending method and device
CN108169996A (en) A kind of test method of stereo camera shooting module motor characteristics, apparatus and system
EP3218756B1 (en) Direction aware autofocus
US20150015771A1 (en) Image-capturing devices and methods
CN105847658A (en) Multipoint focus method, device and intelligent terminal
JP6645711B2 (en) Image processing apparatus, image processing method, and program
CN117082344A (en) Image shooting method and device
CN118044215A (en) Macro shooting method, electronic equipment and computer readable storage medium
WO2022004302A1 (en) Image processing device, imaging device, image processing method, and program
JP6623419B2 (en) Display control device, imaging device, smartphone, display control method, and program
US20160198084A1 (en) Image pickup apparatus, operation support method, and medium recording operation support program
CN116233605B (en) Focusing implementation method and device, storage medium and image pickup equipment
CN117875339A (en) Anti-fake code verification method and device
KR20240067963A (en) User interface for camera focus
KR20240010685A (en) Electronic device, method, and non-transitory computer readable storage medium for user interface for photographing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination