CN113411477B - Image acquisition method, device and equipment - Google Patents


Info

Publication number
CN113411477B
CN113411477B
Authority
CN
China
Prior art keywords
target object
image
sub
preview image
image acquisition
Prior art date
Legal status
Active
Application number
CN202110649617.6A
Other languages
Chinese (zh)
Other versions
CN113411477A (en)
Inventor
Yan Lin (颜林)
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202110649617.6A
Publication of CN113411477A
Application granted
Publication of CN113411477B

Classifications

    • H04N 23/50: Cameras or camera modules comprising electronic image sensors; Control thereof; Constructional details
    • G08B 21/24: Status alarms; Reminder alarms, e.g. anti-loss alarms
    • H04N 23/62: Control of cameras or camera modules; Control of parameters via user interfaces
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this specification disclose an image acquisition method, apparatus, and device, the method being applied to a terminal device. During acquisition of an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, the relative position of the preview image and the acquisition frame is obtained. Each sub-strategy of a preset adjustment strategy is then executed in sequence based on that relative position, and while each sub-strategy executes the user is reminded by voice and/or vibration to move the terminal device so as to adjust the position of the image acquisition frame until the preview image of the target object falls within it. When the preview image of the target object is within the image acquisition frame, the image of the target object is acquired and output.

Description

Image acquisition method, device and equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for acquiring an image.
Background
With the popularization of mobile terminals, more and more users shop, transfer funds, invest, and the like through mobile terminal devices. Before enjoying these convenient internet services, a user usually needs to verify his or her identity through the terminal device to activate the corresponding service, such as online account opening or real-name account authentication. Identity verification requires the user to photograph, with the terminal device, a document that can prove the user's identity and transmit the image to a server, which verifies the user's identity.
However, when a visually impaired user (such as a blind person) attempts identity authentication in this way, the absence of visual feedback makes it impossible to tell whether the terminal device is aimed at the corresponding certificate, so the success rate of certificate-image acquisition is low and acquiring the image is difficult for the user.
Disclosure of Invention
It is an object of embodiments of the present specification to provide a barrier-free interaction method that enables a vision-impaired user to scan a document.
In order to implement the above technical solution, the embodiments of the present specification are implemented as follows:
An image acquisition method provided in an embodiment of the present specification is applied to a terminal device and includes: during acquisition of an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, acquiring the relative position of the preview image of the target object and the image acquisition frame; sequentially executing each sub-strategy of a preset adjustment strategy based on that relative position, and, while executing each sub-strategy, reminding the user by voice and/or vibration to move the terminal device so as to adjust the position of the image acquisition frame until the preview image of the target object is within the frame; and, when the preview image of the target object is within the image acquisition frame, acquiring the image of the target object and outputting it.
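As a minimal Python sketch of the claimed flow (all names, types, and the callback structure are illustrative assumptions; the patent defines no code):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle; (x, y) is the top-left corner."""
    x: float
    y: float
    w: float
    h: float

def contains(frame: Rect, preview: Rect) -> bool:
    """True if the preview-image rectangle lies fully inside the capture frame."""
    return (frame.x <= preview.x and frame.y <= preview.y
            and preview.x + preview.w <= frame.x + frame.w
            and preview.y + preview.h <= frame.y + frame.h)

def acquire_image(frame, get_preview, sub_strategies, capture):
    """Run each preset sub-strategy in order (each one prompts the user by
    voice/vibration to move the device), then capture once the preview image
    sits inside the acquisition frame."""
    for strategy in sub_strategies:      # preset execution order
        strategy(get_preview, frame)     # reminds the user to move the device
    if contains(frame, get_preview()):
        return capture()                 # acquire and output the target image
    return None
```

The callbacks stand in for the camera preview, the prompting logic, and the shutter; on a real device they would wrap the platform camera and accessibility APIs.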
An embodiment of this specification provides an image acquisition apparatus, the apparatus including: an orientation acquisition module configured to acquire the relative position of the preview image of a target object and a preset image acquisition frame if, during acquisition of an image of the target object through a camera assembly, the preview image is detected not to be within the frame; an adjustment module configured to sequentially execute each sub-strategy of a preset adjustment strategy based on that relative position and, while executing each sub-strategy, remind the user by voice and/or vibration to move the apparatus so as to adjust the position of the image acquisition frame until the preview image of the target object is within the frame; and an image acquisition module configured to acquire and output the image of the target object when the preview image is within the image acquisition frame.
An embodiment of the present specification provides an image acquisition device comprising a processor and a memory arranged to store computer-executable instructions that, when executed, cause the processor to: during acquisition of an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, acquire the relative position of the preview image and the frame; sequentially execute each sub-strategy of a preset adjustment strategy based on that relative position and, while executing each sub-strategy, remind the user by voice and/or vibration to move the device so as to adjust the position of the image acquisition frame until the preview image is within the frame; and, when the preview image of the target object is within the image acquisition frame, acquire and output the image of the target object.
The present specification also provides a storage medium for storing computer-executable instructions that, when executed, implement the following process: during acquisition of an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, acquire the relative position of the preview image and the frame; sequentially execute each sub-strategy of a preset adjustment strategy based on that relative position and, while executing each sub-strategy, remind the user by voice and/or vibration to move the terminal device so as to adjust the position of the image acquisition frame until the preview image is within the frame; and, when the preview image of the target object is within the image acquisition frame, acquire and output the image of the target object.
Drawings
To describe the embodiments of the present specification or the prior-art technical solutions more clearly, the drawings needed for the description are briefly introduced below. The drawings described below cover only some embodiments of the present specification; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a diagram illustrating an embodiment of a method for capturing an image;
FIG. 2 is a diagram illustrating another embodiment of a method for capturing an image;
FIG. 3 is a schematic diagram illustrating an adjustment process of an image capturing orientation according to the present disclosure;
FIG. 4A is a schematic view of another image capturing orientation adjustment process in accordance with the present disclosure;
FIG. 4B is a schematic diagram illustrating another process for adjusting the image capturing orientation;
FIG. 4C is a schematic diagram illustrating another image capturing orientation adjustment process according to the present disclosure;
FIG. 5 illustrates an embodiment of an image capture device according to the present disclosure;
FIG. 6 is an embodiment of an image capturing device according to the present disclosure.
Detailed Description
The embodiment of the specification provides an image acquisition method, device and equipment.
To help those skilled in the art better understand the technical solutions in the present specification, these solutions are described below clearly and completely with reference to the drawings of the embodiments. The described embodiments are only a part, not all, of the embodiments of the present specification; all other embodiments obtained by a person skilled in the art from them without inventive effort fall within the protection scope of the present specification.
Example one
As shown in fig. 1, an embodiment of the present specification provides an image acquisition method whose execution subject may be a terminal device, such as a mobile phone or a tablet computer. The method may specifically comprise the following steps:
in step S102, in the process of capturing an image of the target object by the camera assembly, if it is detected that the preview image of the target object is not within the preset image capturing frame, the relative orientation between the preview image of the target object and the image capturing frame is obtained.
The camera assembly may be the component used for image acquisition in the terminal device, such as a phone camera. The target object may be of many types: a physical file, a certificate of the user (for example a bank card, a social security card, or a property-right certificate), or even a part of the user (if the user's head portrait is to be captured, the target object may be the user's head). It may be set according to the actual situation, which this embodiment does not limit. The preview image is the image that the camera assembly acquires before shooting a picture or video and that is displayed in the shooting preview interface; during the preview stage, the user may adjust the position of the object to be shot, or of the camera assembly, so that the shot matches the user's expectation. The image acquisition frame may be preset in a designated area of the preview interface; its inner area may constrain the size of the object to be shot, while its outer area may be blurred or masked so that the image inside the frame stands out. The frame may likewise be set according to the actual situation, which this embodiment does not limit.
In implementation, as noted in the background, verifying a user's identity for internet services such as online account opening or real-name account authentication typically requires the user to photograph an identity certificate with the terminal device and send the image to a server. A visually impaired user (such as a blind person), lacking visual feedback, cannot tell whether the terminal device is aimed at the certificate, so the acquisition success rate is low and acquiring the certificate image is difficult. The embodiment of the present specification therefore provides a barrier-free interaction manner, which may specifically include the following:
Currently, many user services may be handled over a network, such as online account opening, real-name authentication, and login, and these services need to verify the user's identity. For example, a login service needs to verify the identity of the user currently logging in, which the user may do through face recognition. An online account-opening or real-name-authentication service needs to verify the identity of the user being registered or authenticated, for which the user may photograph a certificate proving his or her identity and send it to a server for verification. A card-binding service similarly requires the user to photograph the card to be bound (such as a bank card or a social security card). In the course of such a service, an image of an object (the target object, such as the user or the user's certificate) often needs to be acquired. The terminal device may then start the camera assembly and acquire the image through it; once started, the terminal device displays the shooting preview interface captured by the camera assembly, in which an image acquisition frame may be provided so that the acquired image meets the requirements and the acquisition is more effective.
Considering the low success rate of image acquisition for visually impaired users (such as blind people), the terminal device detects, while acquiring the image of the target object through the camera assembly, whether the preview image of the target object is within the preset image acquisition frame; if it is not, the relative position between the preview image and the frame may be acquired. The direction and position of the preview image may be obtained using the image acquisition frame as the reference object, or the orientation of the preview image may be determined from its coordinates in the shooting preview interface, and so on.
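One plausible way to compute this relative orientation, using the capture frame's centre as the reference (a sketch under assumed screen coordinates, not a computation the patent specifies):

```python
def relative_orientation(preview_cx, preview_cy, frame_cx, frame_cy, tol=1.0):
    """Compare the centre of the preview image with the centre of the capture
    frame and report which way the preview lies. Screen coordinates: y grows
    downward. Returns e.g. 'up-left', 'right', or 'centred'. The tolerance
    `tol` absorbs small offsets that need no correction."""
    vertical = ""
    horizontal = ""
    if preview_cy < frame_cy - tol:
        vertical = "up"
    elif preview_cy > frame_cy + tol:
        vertical = "down"
    if preview_cx < frame_cx - tol:
        horizontal = "left"
    elif preview_cx > frame_cx + tol:
        horizontal = "right"
    return "-".join(p for p in (vertical, horizontal) if p) or "centred"
```

The result string is exactly the kind of value a voice prompt ("move the phone up and to the left") could be generated from.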
In step S104, each sub-policy of the preset adjustment policy is executed in sequence based on the relative orientation between the preview image of the target object and the image capture frame, and while each sub-policy is being executed the user is prompted by voice and/or vibration to move the terminal device so as to adjust the orientation of the image capture frame until the preview image of the target object is within the frame.
The vibration reminder may be implemented in various ways; for example, different reminders may be distinguished by vibration duration, with a short vibration serving as one kind of prompt to move the terminal device and a long continuous vibration as another. This may be set according to the actual situation, which this embodiment does not limit.
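The duration-based vibration scheme described above might be encoded as a small lookup table (the specific durations and reminder names are invented for illustration; a real device would feed these pulses to a platform vibration API):

```python
# Map reminder kinds to vibration patterns: (duration_ms, repetitions).
# These durations are illustrative assumptions, not values from the patent.
VIBRATION_PATTERNS = {
    "keep_moving": (3000, 1),   # one long, continuous vibration
    "stop":        (150, 1),    # one short pulse
    "fine_tune":   (150, 3),    # alternating short pulses
}

def vibrate(kind: str) -> list:
    """Return the pulse timeline (pulse lengths in ms) for a reminder kind."""
    duration, reps = VIBRATION_PATTERNS[kind]
    return [duration] * reps
```

On Android, for example, such a timeline would typically be handed to the system vibrator as a waveform.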
In implementation, to better assist a visually impaired user in image acquisition, an adjustment policy may be preset according to the actual situation. The adjustment policy may include one or more sub-policies, and for multiple sub-policies an execution order may be preset; for example, an adjustment policy containing sub-policy 1, sub-policy 2, and sub-policy 3 may be preset to execute sub-policy 3 first, then sub-policy 1, and finally sub-policy 2. After the relative orientation between the preview image of the target object and the image capture frame is obtained as above, the preset adjustment policy can be retrieved, its sub-policies extracted, and their execution order determined. The first sub-policy may then be executed based on that relative orientation; while it executes, the user may be prompted by voice and/or vibration to move the terminal device in the movement manner corresponding to this sub-policy, adjusting the position of the image capture frame until the adjusted position satisfies the movement condition corresponding to the first sub-policy.
Then the second sub-policy in the adjustment policy may be started: the relative orientation between the preview image of the target object and the image capture frame after the first sub-policy is obtained, the second sub-policy is executed, and while it executes the user is again prompted by voice and/or vibration to move the terminal device in the movement manner corresponding to the second sub-policy until the adjusted orientation of the image capture frame satisfies its movement condition. The third sub-policy, the fourth sub-policy, and so on may then be executed in the same manner until every sub-policy in the adjustment policy has been completed, after which the preview image of the target object is located within the image capture frame.
For example, suppose the adjustment policy includes three sub-policies, sub-policy 1, sub-policy 2, and sub-policy 3, to be executed in the preset order 3, 1, 2. Sub-policy 3 may be executed based on the relative position of the preview image of the target object and the image capture frame; while it executes, the user may be reminded by short vibrations to move the terminal device in the movement manner corresponding to sub-policy 3 until the adjusted position of the image capture frame satisfies the movement condition of sub-policy 3. Then sub-policy 1 may be started: the relative orientation under sub-policy 1 is obtained, sub-policy 1 is executed, and the user is reminded by a long vibration to move the terminal device in the movement manner corresponding to sub-policy 1 until the adjusted orientation of the image capture frame satisfies the movement condition of sub-policy 1.
Finally, sub-policy 2 may be started: the relative orientation of the preview image of the target object and the image capture frame under sub-policy 2 is obtained and sub-policy 2 is executed; while it executes, the user is reminded by alternating long and short vibrations to move the terminal device in the movement manner corresponding to sub-policy 2 until the adjusted orientation of the image capture frame satisfies the movement condition of sub-policy 2. Through the above processing, the preview image of the target object can be brought within the image capture frame.
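The ordered execution in the example above (sub-policy 3, then 1, then 2) can be sketched as a loop that repeats each sub-policy's prompt until its movement condition is satisfied (hypothetical structure; the condition and reminder fields are assumptions):

```python
def run_adjustment(sub_policies, order, get_orientation, prompt_user):
    """Execute sub-policies in the preset order; each one keeps prompting the
    user to move the device until its own movement condition is satisfied.

    sub_policies: dict mapping an index to {"condition": fn, "reminder": str}
    order:        preset execution order, e.g. [3, 1, 2]
    """
    for idx in order:
        policy = sub_policies[idx]
        while not policy["condition"](get_orientation()):
            prompt_user(policy["reminder"])    # voice and/or vibration cue
```

`get_orientation` would re-read the preview/frame relationship after each user movement; `prompt_user` would route to voice broadcast or the vibration patterns described above.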
In step S106, when the preview image of the target object is within the image capturing frame, an image of the target object is captured and output.
In implementation, the preview image of the target object may be brought into the image capture frame through the adjustment policy. If it is detected that the preview image of the target object is within the image capture frame, the camera assembly may capture the image of the target object, and the image may then be output to the corresponding service server so that the server can execute the specified service processing based on it; this may be set according to the actual situation, which this embodiment does not limit.
The embodiment of the specification provides an image acquisition method applied to a terminal device. During acquisition of an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, the relative position of the preview image and the frame is acquired; each sub-strategy of a preset adjustment strategy is then executed in sequence based on that relative position, with the user reminded by voice and/or vibration during each sub-strategy to move the terminal device and adjust the position of the image acquisition frame until the preview image falls within it; and when the preview image of the target object is within the image acquisition frame, the image of the target object is acquired. By decomposing the shooting of the target object (such as a certificate of the user) into individual adjustment sub-strategies and adjusting the terminal device under each in turn, a suitable shooting position is finally reached, providing a visually impaired user with a more convenient, rapid, and barrier-free way to scan a document.
Example two
As shown in fig. 2, the execution subject of the method may be a terminal device, such as a mobile phone or a tablet computer. The method may specifically comprise the following steps:
in step S202, in the process of capturing an image of a target object by the camera assembly, it is detected whether an included angle between the target object and a designated direction in a preview image of the target object matches a first included angle, where the first included angle is an included angle between an image capturing frame and the designated direction.
The target object may be a certificate of the user, for example one capable of proving the user's identity, or of proving that the user owns a certain account or card; it may be set according to the actual situation, which this embodiment does not limit. The designated direction may be any direction, such as due south or due north, and may likewise be set according to the actual situation, which this specification does not limit.
In implementation, the terminal device may be provided with a detection mechanism or detection algorithm for measuring the angle between the target object in the preview image and the designated direction, and for measuring the first angle. Considering that the orientation in which the user places the target object may not match the orientation of the image capture frame, these angles may be detected while the camera assembly acquires the image of the target object, and the two orientations adjusted until they match. Specifically, the user may place the target object to be photographed on a flat surface (for example a desktop or the floor) and then capture its image with the camera assembly of the terminal device. During acquisition, the terminal device may start the detection mechanism or algorithm, detect the angle between the target object in the preview image and the designated direction, and simultaneously detect the angle between the image capture frame and the designated direction (that is, the first angle).
Then the angle between the target object in the preview image and the designated direction may be compared with the first angle: if the two are not equal, the angles do not match; if they are equal, they match.
In step S204, if the angles do not match, the moving direction and moving angle of the terminal device are determined based on the first angle and the angle between the target object in the preview image and the designated direction.
In implementation, if the angle between the target object in the preview image and the designated direction does not match the first angle, comparing the two angles yields their difference and the corresponding orientation adjustment. For example, as shown in fig. 3, if the angle between the target object and the designated direction in the preview image is 45 degrees and the first angle is 0 degrees, it can be determined that the camera assembly (or the terminal device) needs to be rotated 45 degrees clockwise or 135 degrees counterclockwise; this gives the orientation adjustment, that is, the moving direction and moving angle of the terminal device.
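The 45-degrees-clockwise versus 135-degrees-counterclockwise choice in the example generalises to picking a rotation between the two candidate directions; the sketch below simply returns the smaller one (an illustrative assumption: the patent's 135-degree alternative presumably also accounts for the document's rotational symmetry, which this toy function ignores):

```python
def rotation_to_match(object_angle: float, frame_angle: float):
    """Return (direction, degrees) for the smaller rotation that aligns the
    target object's angle with the capture frame's angle. Both angles are
    measured from the same designated direction, in degrees."""
    diff = (object_angle - frame_angle) % 360
    if diff == 0:
        return ("none", 0)
    if diff <= 180:
        return ("clockwise", diff)
    return ("counterclockwise", 360 - diff)
```

For the patent's worked example (object at 45 degrees, frame at 0) this yields a 45-degree clockwise rotation, matching the shorter of the two options given.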
In step S206, based on the moving direction and the moving angle of the terminal device, the user is prompted to move the terminal device in a voice and/or vibration manner to adjust the first included angle, so that the included angle between the target object in the preview image and the designated direction matches the adjusted first included angle.
In implementation, as shown in fig. 3, after the moving direction and moving angle of the terminal device are obtained in the above manner, the user may be reminded by voice broadcast (e.g., "please rotate 45 degrees clockwise") to move the terminal device and adjust the first included angle. The user then rotates the terminal device slowly, and when the angle between the target object in the preview image and the designated direction matches the adjusted first angle, the user may be told to stop moving the terminal device, either by voice broadcast (e.g., "stop") or by vibration (e.g., a short vibration). Alternatively, the reminder to move may itself be given by vibration (e.g., a continuous vibration lasting about 3 seconds), with a short vibration again signalling the user to stop.
In step S208, if it is detected that the preview image of the target object is not within the preset image capture frame, the relative orientation of the preview image of the target object and the image capture frame is acquired.
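A minimal sketch of step S208 follows: compare the bounding box of the target object's preview image with the preset image acquisition frame and report in which direction(s) the preview lies outside the frame. The `(left, top, right, bottom)` box format in screen pixels is an assumption for illustration.

```python
def relative_orientation(preview_box, frame_box):
    """Return offsets such as ["above", "right"]; empty list means inside."""
    pl, pt, pr, pb = preview_box
    fl, ft, fr, fb = frame_box
    offsets = []
    if pt < ft: offsets.append("above")   # preview extends above the frame
    if pb > fb: offsets.append("below")   # preview extends below the frame
    if pl < fl: offsets.append("left")    # preview extends past the left edge
    if pr > fr: offsets.append("right")   # preview extends past the right edge
    return offsets
```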
In practical application, the adjustment strategy may include multiple sub-strategies, such as an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy. Any one of these sub-strategies may be executed through the following processing, specifically steps S210 to S216 below.
In step S210, the sub-policy is executed based on the relative position between the preview image of the target object and the image capturing frame, and the user is prompted by voice to move the terminal device in the direction indicated by the sub-policy to adjust the position of the image capturing frame.
In implementation, taking the up-down adjustment sub-policy in the adjustment policy as an example, the up-down adjustment sub-policy is executed based on the relative orientation between the preview image of the target object and the image acquisition frame, at this time, the terminal device may prompt the user to move the terminal device upward or downward in a text manner, and meanwhile, the screen reading application program in the terminal device may prompt the user to move the terminal device upward or downward in a voice broadcast manner.
In step S212, in the process that the user moves the terminal device in the direction indicated by the sub-policy, the edge position of the target object in the preview image of the target object is acquired.
In implementation, an edge detection mechanism may be disposed in the terminal device, and the user moves the terminal device upward or downward according to the direction indicated by the up-down adjustment sub-policy, in this process, the terminal device may detect an edge position of the target object in the preview image of the target object through the edge detection mechanism, and obtain the edge position of the target object in the preview image of the target object, thereby determining whether the terminal device is located at a better shooting position.
The specific processing manner of step S212 may be various, and an alternative processing manner is provided below, and may specifically include the following processing of step A2 and step A4.
In step A2, in the process that the user moves the terminal device according to the direction indicated by the sub-policy, the position of the vertex of the target object in the preview image of the target object is positioned based on a preset corner positioning algorithm.
The corner point positioning algorithm may be used to locate (or determine the orientation of) the vertices of the object to be photographed in the preview image; for example, if the object to be photographed is a rectangle, the positions of its 4 corner vertices may be located. The corner point positioning algorithm may be implemented by various different algorithms, which may be set according to the actual situation, and this is not limited in the embodiments of the present specification.
In step A4, the edge position of the target object in the preview image of the target object is determined based on the position of the vertex of the target object in the preview image of the target object.
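Steps A2 and A4 can be sketched as follows: given the four corner vertices located by a corner positioning algorithm (taken here as already computed), the edge positions of a rectangular target object fall out as consecutive vertex pairs. The clockwise vertex ordering is an assumption for illustration.

```python
def edges_from_corners(corners):
    """corners: [(x, y), ...] in clockwise order; returns edges as vertex pairs."""
    n = len(corners)
    # Each edge connects a vertex to the next one, wrapping around at the end.
    return [(corners[i], corners[(i + 1) % n]) for i in range(n)]
```

For a rectangle located at `[(0, 0), (4, 0), (4, 2), (0, 2)]`, this yields its four edges, from which the edge positions used in step S214 can be read off.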
In step S214, it is determined whether the adjusted orientation of the image capturing frame satisfies the condition corresponding to the sub-policy based on the edge position of the target object in the preview image of the target object.
The conditions corresponding to the sub-strategies may be various. For example, if the target object is a rectangle and the image capturing frame is also a rectangle, the condition corresponding to a sub-strategy may be that the long side of the image capturing frame is parallel to the long side of the target object and the short side of the image capturing frame is parallel to the short side of the target object. The conditions may be set according to the actual situation, and this is not limited in the embodiments of the present specification.
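The example parallelism condition above might be checked as in the following sketch. Edges are `((x1, y1), (x2, y2))` pairs, and the angular tolerance is an assumption, not a value given in the patent.

```python
import math

def edge_angle(edge):
    (x1, y1), (x2, y2) = edge
    # atan2 gives the edge's direction; fold into [0, 180) degrees since
    # an edge has no inherent orientation.
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def sides_parallel(object_edge, frame_edge, tolerance_deg=3.0):
    diff = abs(edge_angle(object_edge) - edge_angle(frame_edge))
    # 179 degrees vs 1 degree is only a 2-degree misalignment, so take
    # the wrapped angular distance.
    return min(diff, 180.0 - diff) <= tolerance_deg
```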
In step S216, if yes, the user is prompted by vibration to stop moving the terminal device, and the relative orientation between the current preview image of the target object and the image capture frame is obtained to execute the next sub-policy in the adjustment policy.
To further explain the adjustment manner in the image acquisition process, for the case that the adjustment strategy includes an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy, the processing of the steps S210 to S216 can be implemented by the following specific processes:
as shown in fig. 4A, if the execution sequence of the adjustment policy is the up-down adjustment sub-policy, the left-right adjustment sub-policy, and the distance adjustment sub-policy, the up-down adjustment sub-policy is executed first based on the relative orientation between the preview image of the target object and the image capture frame. At this time, the terminal device may prompt the user to move the terminal device up or down in a text manner, and meanwhile, the screen reading application program in the terminal device may prompt the user to move the terminal device up or down in a voice broadcast manner. In this process, the terminal device may detect the edge position of the target object in the preview image of the target object through the edge detection mechanism and acquire that edge position, so as to judge whether the terminal device is in a better shooting position. After the terminal device is moved upward or downward, when the terminal device is located at a better shooting position in the longitudinal axis direction, the terminal device may prompt the user in a text manner that the up-down movement has succeeded; at this time, the terminal device may start a vibration prompt, and the user can determine through touch that the up-down position of the target object is aligned.
As shown in fig. 4B, the terminal device may then obtain the relative orientation between the preview image of the target object and the image capture frame again and execute the left-right adjustment sub-policy. At this time, the terminal device may prompt the user to move the terminal device left or right in a text manner, and meanwhile, the screen reading application program in the terminal device may prompt the user to move the terminal device left or right in a voice broadcast manner. In this process, the terminal device may detect the edge position of the target object in the preview image of the target object through the edge detection mechanism and acquire that edge position, so as to judge whether the terminal device is in a better shooting position. After the terminal device is moved leftward or rightward, when the terminal device is located at a better shooting position in the horizontal axis direction, the terminal device may prompt the user in a text manner that the left-right movement has succeeded; at this time, the terminal device may start a vibration prompt, and the user can determine through touch that the left-right position of the target object is aligned.
As shown in fig. 4C, the terminal device may then obtain the relative orientation between the preview image of the target object and the image capture frame again and execute the distance adjustment sub-policy. At this time, the terminal device may prompt the user to move the terminal device toward or away from the target object in a text manner, and meanwhile, the screen reading application program in the terminal device may prompt the user to move the terminal device toward or away from the target object in a voice broadcast manner. In this process, the terminal device may detect the edge position of the target object in the preview image of the target object through the edge detection mechanism and acquire that edge position, so as to judge whether the terminal device is in a better shooting position. After the terminal device is moved toward or away from the target object, when the terminal device is located at a better shooting position in the distance direction, the terminal device may prompt the user in a text manner that the distance adjustment has succeeded; at this time, the terminal device may start a vibration prompt, and the user can determine through touch that the distance to the target object is appropriate. Finally, when the up-down position, the left-right position, and the distance are all suitable, the user may be prompted in a text, voice, or vibration manner to stop moving the terminal device.
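The sequential execution of the three sub-policies shown in figs. 4A to 4C can be condensed into the following sketch. The callables standing in for the prompt output, the edge-detection position check, and the vibration cue are assumptions for illustration, not the patent's actual device APIs.

```python
def run_adjustment(sub_policies, prompt, condition_met, vibrate):
    """sub_policies: ordered names, e.g. ["up-down", "left-right", "distance"]."""
    for name in sub_policies:
        # Prompt the user (text plus screen-reader voice in the patent)
        # until the edge-detection check reports a good position on this axis.
        while not condition_met(name):
            prompt(name)
        vibrate(name)  # tactile confirmation that this axis is aligned
    prompt("stop")     # all axes satisfied: remind the user to stop moving
```

A caller would re-acquire the relative orientation between the preview image and the acquisition frame inside `condition_met` before each check, matching the "obtain the relative orientation again" step between sub-policies.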
In step S218, when the preview image of the target object is within the image capturing frame, the user is prompted to perform an image capturing operation by voice and/or vibration.
The image capturing operation may include multiple operations, for example, pressing a shooting key or controlling the terminal device to execute a shooting operation through a voice instruction, which may be specifically set according to an actual situation, and this is not limited in the embodiments of the present specification.
In step S220, when an operation instruction of image capturing triggered by a user is received, an image of a target object is captured and the image of the target object is output.
The embodiment of the specification provides an image acquisition method. In the process of acquiring an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not in a preset image acquisition frame, the relative orientation between the preview image of the target object and the image acquisition frame is acquired. Each sub-strategy in a preset adjustment strategy is then sequentially executed based on this relative orientation, and the user is reminded in a voice and/or vibration manner to move the terminal device in the process of executing each sub-strategy so as to adjust the orientation of the image acquisition frame, so that the preview image of the target object falls within the image acquisition frame, at which point the image of the target object is acquired. In this way, the process of shooting the target object (such as a certificate of the user) is decomposed into individual adjustment sub-strategies, the terminal device is adjusted based on each sub-strategy in turn, and a suitable shooting position is finally obtained, thereby providing a more convenient and rapid image acquisition manner. Meanwhile, by combining voice and vibration reminders, a vision-impaired user can be assisted in quickly locating a suitable shooting position and performing barrier-free scanning of the target object.
In addition, the process of shooting the target object is divided into an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy, and modes such as vibration feedback are added, so as to further assist the vision-impaired user in performing barrier-free scanning of the target object.
EXAMPLE III
Based on the same idea, the image capturing method provided in the embodiments of the present specification further provides an image capturing device, as shown in fig. 5.
The image acquisition device comprises: an orientation acquisition module 501, an adjustment module 502 and an image acquisition module 503, wherein:
the orientation obtaining module 501, during the process of collecting the image of the target object by the camera component, if it is detected that the preview image of the target object is not in the preset image collecting frame, obtains the relative orientation between the preview image of the target object and the image collecting frame;
an adjusting module 502, configured to sequentially execute each sub-policy in preset adjusting policies based on the relative orientation between the preview image of the target object and the image acquisition frame, and remind a user to move the device in a voice and/or vibration manner in the process of executing each sub-policy to adjust the orientation of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
and an image acquisition module 503, configured to acquire the image of the target object and output the image of the target object when the preview image of the target object is within the image acquisition frame.
In an embodiment of the present specification, the adjustment policy includes an up-down adjustment sub-policy, a left-right adjustment sub-policy, and a distance adjustment sub-policy.
In an embodiment of this specification, the target object is a certificate of the user.
In this embodiment of this specification, for any sub-policy in the adjustment policy, the adjusting module 502 includes:
the sub-strategy execution unit is used for executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the device according to the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
an edge position acquiring unit, which is used for acquiring the edge position of the target object in the preview image of the target object in the process that the user moves the device according to the direction indicated by the sub-strategy;
the judging unit is used for determining whether the adjusted direction of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
and if so, reminding the user to stop moving the device in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies.
In an embodiment of this specification, in a process that a user moves the apparatus in a direction indicated by the sub-policy, the edge position obtaining unit positions a position of a vertex of the target object in a preview image of the target object based on a preset corner positioning algorithm; determining an edge position of the target object in the preview image of the target object based on a position of a vertex of the target object in the preview image of the target object.
In an embodiment of this specification, the image capturing module 503 includes:
the operation reminding unit is used for reminding the user to execute image acquisition operation in a voice and/or vibration mode when the preview image of the target object is positioned in the image acquisition frame;
and the image acquisition unit is used for acquiring the image of the target object and outputting the image of the target object when receiving an image acquisition operation instruction triggered by the user.
In an embodiment of this specification, the apparatus further includes:
the included angle detection module is used for detecting whether an included angle between the target object and the designated direction in the preview image is matched with a first included angle, wherein the first included angle is an included angle between the image acquisition frame and the designated direction;
a moving mode determining module, configured to determine a moving direction and a moving angle of the device based on an included angle between the target object and the designated direction in the preview image and the first included angle if the target object and the designated direction are not matched;
and the included angle adjusting module is used for reminding a user to move the device to adjust the first included angle in a voice and/or vibration mode based on the moving direction and the moving angle of the device, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle.
An embodiment of the present specification provides an image collecting apparatus. In the process of collecting an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not in a preset image collecting frame, the relative orientation between the preview image of the target object and the image collecting frame is obtained. Each sub-policy of a preset adjustment policy is then sequentially executed based on this relative orientation, and the user is prompted in a voice and/or vibration manner to move the terminal device in the process of executing each sub-policy so as to adjust the orientation of the image collecting frame, so that the preview image of the target object falls within the image collecting frame, at which point the image of the target object is collected. In this way, the process of shooting the target object (such as a certificate of the user) is decomposed into individual adjustment sub-policies, the terminal device is adjusted based on each sub-policy in turn, and a suitable shooting position is finally obtained, thereby providing a more convenient and rapid image collecting manner. Meanwhile, by combining voice and vibration prompts, a vision-impaired user can be assisted in quickly locating a suitable shooting position and performing barrier-free scanning of the target object.
In addition, the process of shooting the target object is decomposed into an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy, and modes such as vibration feedback are added, so as to further assist the vision-impaired user in performing barrier-free scanning of the target object.
Example four
Based on the same idea, the image capturing apparatus provided in the embodiment of the present specification further provides an image capturing device, as shown in fig. 6.
The image acquisition device may be the terminal device provided in the above embodiment.
The image capturing devices may have relatively large differences due to different configurations or performances, and may include one or more processors 601 and a memory 602, where the memory 602 may store one or more applications or data. The memory 602 may be transient or persistent storage. An application program stored in the memory 602 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the image capturing device. Still further, the processor 601 may be configured to communicate with the memory 602 to execute the series of computer-executable instructions in the memory 602 on the image capturing device. The image capturing device may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input-output interfaces 605, and one or more keyboards 606.
In particular, in this embodiment, the image capturing device includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the image capturing device, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
in the process of acquiring an image of a target object through a camera assembly, if the fact that a preview image of the target object is not in a preset image acquisition frame is detected, acquiring the relative position of the preview image of the target object and the image acquisition frame;
sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in a voice and/or vibration mode in the process of executing each sub-strategy so as to adjust the position of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
and when the preview image of the target object is in the image acquisition frame, acquiring the image of the target object and outputting the image of the target object.
In an embodiment of the present specification, the adjustment policy includes an up-down adjustment sub-policy, a left-right adjustment sub-policy, and a distance adjustment sub-policy.
In an embodiment of this specification, the target object is a certificate of the user.
In an embodiment of this specification, for any one of the sub-policies in the adjustment policies, sequentially executing each of the preset adjustment policies based on the relative orientation between the preview image of the target object and the image capture frame, and prompting a user to move the terminal device in a manner of voice and/or vibration in the process of executing each of the sub-policies to adjust the orientation of the image capture frame, so that the preview image of the target object is in the image capture frame, includes:
executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
acquiring the edge position of the target object in a preview image of the target object in the process that a user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining whether the adjusted orientation of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the terminal equipment in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies.
In this embodiment of the present specification, in the process that the user moves the terminal device according to the direction indicated by the sub-policy, acquiring the edge position of the target object in the preview image of the target object includes:
positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining an edge position of the target object in the preview image of the target object based on a position of a vertex of the target object in the preview image of the target object.
In this embodiment of the present specification, the acquiring an image of the target object and outputting the image of the target object when the preview image of the target object is within the image acquisition frame includes:
when the preview image of the target object is in the image acquisition frame, reminding the user to execute image acquisition operation in a voice and/or vibration mode;
and when an image acquisition operation instruction triggered by the user is received, acquiring the image of the target object and outputting the image of the target object.
In this embodiment of this specification, before it is detected that the preview image of the target object is not within the preset image capture frame, the method further includes:
detecting whether an included angle between the target object and the designated direction in the preview image is matched with a first included angle, wherein the first included angle is an included angle between the image acquisition frame and the designated direction;
if not, determining the moving direction and the moving angle of the terminal equipment based on the included angle between the target object and the specified direction in the preview image and the first included angle;
based on the moving direction and the moving angle of the terminal equipment, the user is reminded to move the terminal equipment in a voice and/or vibration mode so as to adjust the first included angle, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle.
An embodiment of the present specification provides an image capturing device. In the process of capturing an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not in a preset image capturing frame, the relative orientation between the preview image of the target object and the image capturing frame is obtained. Each sub-policy of a preset adjustment policy is then sequentially executed based on this relative orientation, and the user is prompted in a voice and/or vibration manner to move the terminal device in the process of executing each sub-policy so as to adjust the orientation of the image capturing frame, so that the preview image of the target object falls within the image capturing frame, at which point the image of the target object is captured. In this way, the process of shooting the target object (such as a certificate of the user) is decomposed into individual adjustment sub-policies, the terminal device is adjusted based on each sub-policy in turn, and a suitable shooting position is finally obtained, thereby providing a more convenient and rapid image capturing manner. Meanwhile, by combining voice and vibration prompts, a vision-impaired user can be assisted in quickly locating a suitable shooting position and performing barrier-free scanning of the target object.
In addition, the process of shooting the target object is decomposed into an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy, and modes such as vibration feedback are added, so as to further assist the vision-impaired user in performing barrier-free scanning of the target object.
EXAMPLE five
Further, based on the methods shown in fig. 1 and fig. 4C, one or more embodiments of the present disclosure further provide a storage medium for storing computer-executable instruction information. In a specific embodiment, the storage medium may be a USB disk, an optical disk, a hard disk, or the like, and when the computer-executable instruction information stored in the storage medium is executed by a processor, the following processes are implemented:
in the process of acquiring an image of a target object through a camera assembly, if the fact that a preview image of the target object is not in a preset image acquisition frame is detected, acquiring the relative position of the preview image of the target object and the image acquisition frame;
sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in a voice and/or vibration mode in the process of executing each sub-strategy so as to adjust the position of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
and when the preview image of the target object is in the image acquisition frame, acquiring the image of the target object and outputting the image of the target object.
In an embodiment of the present specification, the adjustment policy includes an up-down adjustment sub-policy, a left-right adjustment sub-policy, and a distance adjustment sub-policy.
In an embodiment of this specification, the target object is a certificate of the user.
In an embodiment of this specification, for any one of the sub-policies in the adjustment policies, sequentially executing each of the preset adjustment policies based on the relative orientation between the preview image of the target object and the image capture frame, and prompting a user to move the terminal device in a manner of voice and/or vibration in the process of executing each of the sub-policies to adjust the orientation of the image capture frame, so that the preview image of the target object is in the image capture frame, includes:
executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
acquiring the edge position of the target object in a preview image of the target object in the process that a user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining whether the adjusted orientation of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the terminal equipment in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies.
In this embodiment of the present specification, in the process that the user moves the terminal device according to the direction indicated by the sub-policy, acquiring the edge position of the target object in the preview image of the target object includes:
positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining an edge position of the target object in the preview image of the target object based on a position of a vertex of the target object in the preview image of the target object.
In this embodiment of the present specification, the acquiring an image of the target object and outputting the image of the target object when the preview image of the target object is within the image acquisition frame includes:
when the preview image of the target object is in the image acquisition frame, reminding the user to execute image acquisition operation in a voice and/or vibration mode;
and when an image acquisition operation instruction triggered by the user is received, acquiring the image of the target object and outputting the image of the target object.
In an embodiment of this specification, before detecting that the preview image of the target object is not within the preset image capture frame, the method further includes:
detecting whether an included angle between the target object and the designated direction in the preview image is matched with a first included angle, wherein the first included angle is an included angle between the image acquisition frame and the designated direction;
if not, determining the moving direction and the moving angle of the terminal equipment based on the included angle between the target object and the specified direction in the preview image and the first included angle;
based on the moving direction and the moving angle of the terminal equipment, the user is reminded to move the terminal equipment in a voice and/or vibration mode so as to adjust the first included angle, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle.
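The angle-matching step can be reduced to computing a signed angular difference and mapping its sign to a rotation prompt. The sketch below assumes both angles are measured in degrees against the same designated direction; the sign-to-direction mapping and the tolerance value are illustrative choices, not taken from the specification.

```python
def rotation_to_match(object_angle_deg: float, frame_angle_deg: float,
                      tolerance_deg: float = 2.0):
    """Return (direction, magnitude_deg) of the rotation the user should apply
    so the object's angle matches the frame's angle; direction is None when
    the two already match within the tolerance."""
    # Wrap the difference into (-180, 180] so the prompt picks the short way round.
    diff = (frame_angle_deg - object_angle_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= tolerance_deg:
        return None, 0.0
    return ("clockwise" if diff < 0 else "counterclockwise", abs(diff))
```

The returned pair would drive the voice/vibration reminder, e.g. "rotate the phone 10 degrees counterclockwise".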
An embodiment of the present specification provides a storage medium. In the process of acquiring an image of a target object through a camera assembly, if it is detected that the preview image of the target object is not within a preset image acquisition frame, the relative position between the preview image of the target object and the image acquisition frame is acquired. Each sub-strategy of a preset adjustment strategy is then executed in sequence based on that relative position, and during the execution of each sub-strategy the user is reminded, by voice and/or vibration, to move the terminal device so as to adjust the position of the image acquisition frame until the preview image of the target object falls within it; when the preview image of the target object is within the image acquisition frame, the image of the target object is acquired. In this way, the process of shooting the target object (such as the user's certificate) is decomposed into individual adjustment sub-strategies, the terminal device is adjusted according to each sub-strategy in turn, and a suitable shooting position is finally reached, providing a more convenient image acquisition method for visually impaired users. At the same time, the user receives timely voice and/or vibration feedback on the positioning of the terminal device while moving it, which helps a visually impaired user scan the target object without barriers.
In addition, the process of shooting the target object is decomposed into an up-down adjustment sub-strategy, a left-right adjustment sub-strategy and a distance adjustment sub-strategy, and feedback modes such as vibration are added, further assisting visually impaired users in barrier-free scanning of the target object.
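The three-way decomposition described above can be sketched as a simple sequential loop: for each axis, keep prompting a movement direction until the misalignment reaches zero, then vibrate to tell the user to stop. The axis names, the sign-to-direction mapping, and the callback interface are all assumptions made for this illustration.

```python
def run_adjustment_strategies(get_offset, prompt, vibrate):
    """Execute the up-down, left-right and distance sub-strategies in turn.
    get_offset(axis) returns the signed misalignment on that axis (0 = done);
    prompt(text) speaks a movement direction; vibrate() signals 'stop moving'."""
    strategies = [
        ("vertical",   {+1: "move the phone down", -1: "move the phone up"}),
        ("horizontal", {+1: "move the phone left", -1: "move the phone right"}),
        ("distance",   {+1: "move the phone closer", -1: "move the phone away"}),
    ]
    for axis, prompts in strategies:
        offset = get_offset(axis)
        while offset != 0:
            # Voice reminder while the user is still moving the device.
            prompt(prompts[+1] if offset > 0 else prompts[-1])
            offset = get_offset(axis)
        vibrate()  # condition for this sub-strategy met: stop moving
```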
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a programmable logic device (PLD), such as a field programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can easily be obtained merely by slightly logically programming the method flow using the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer-readable program code, the same functionality can be implemented by logically programming the method steps such that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered structures within the hardware component. Or even the means for performing the functions may be regarded as being both software modules for performing the method and structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present description are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the embodiments. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present disclosure, and is not intended to limit the present disclosure. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (8)

1. An image acquisition method is applied to a terminal device and applied to a scene of identity verification of a user with visual impairment, and comprises the following steps:
in the process of acquiring an image of a target object through a camera assembly, detecting whether an included angle between the target object and an appointed direction in a preview image is matched with a first included angle, wherein the first included angle is an included angle between an image acquisition frame and the appointed direction;
if not, determining the moving direction and the moving angle of the terminal equipment based on the included angle between the target object and the specified direction in the preview image and the first included angle;
reminding a user to move the terminal equipment in a voice and/or vibration mode to adjust the first included angle based on the moving direction and the moving angle of the terminal equipment, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle;
if the preview image of the target object is detected not to be in a preset image acquisition frame, acquiring the relative position of the preview image of the target object and the image acquisition frame;
sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in a voice and/or vibration mode in the process of executing each sub-strategy so as to adjust the position of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
when the preview image of the target object is in the image acquisition frame, acquiring the image of the target object and outputting the image of the target object;
for any sub-strategy in the adjustment strategies, sequentially executing each sub-strategy in a preset adjustment strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal device in a voice and/or vibration mode in the process of executing each sub-strategy to adjust the position of the image acquisition frame so that the preview image of the target object is in the image acquisition frame, including:
executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
acquiring the edge position of the target object in a preview image of the target object in the process that a user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining whether the adjusted orientation of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the terminal equipment in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies;
the obtaining of the edge position of the target object in the preview image of the target object by the user in the process of moving the terminal device according to the direction indicated by the sub-policy includes:
positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining an edge position of the target object in the preview image of the target object based on the position of the vertex of the target object in the preview image of the target object.
2. The method of claim 1, the adjustment strategies comprising an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy.
3. The method of claim 1 or 2, the target object being a credential of the user.
4. The method of claim 1, the capturing an image of the target object and outputting the image of the target object when the preview image of the target object is within the image capture frame, comprising:
when the preview image of the target object is in the image acquisition frame, reminding the user to execute image acquisition operation in a voice and/or vibration mode;
and when an image acquisition operation instruction triggered by the user is received, acquiring the image of the target object and outputting the image of the target object.
5. An image acquisition device applied to a scene of identity verification of a user with visual impairment, the device comprises:
the orientation acquisition module is used for acquiring the relative orientation of a preview image of a target object and an image acquisition frame if the preview image of the target object is detected not to be in a preset image acquisition frame in the process of acquiring the image of the target object through a camera assembly;
the adjusting module is used for sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the device in a voice and/or vibration mode in the process of executing each sub-strategy so as to adjust the position of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
the image acquisition module is used for acquiring the image of the target object and outputting the image of the target object when the preview image of the target object is in the image acquisition frame;
the device further comprises:
the included angle detection module is used for detecting whether an included angle between the target object and the designated direction in the preview image is matched with a first included angle, wherein the first included angle is an included angle between the image acquisition frame and the designated direction;
the moving mode determining module is used for determining the moving direction and the moving angle of the device based on the included angle between the target object and the specified direction in the preview image and the first included angle if the target object and the specified direction in the preview image are not matched;
the included angle adjusting module is used for reminding a user to move the device in a voice and/or vibration mode to adjust the first included angle based on the moving direction and the moving angle of the device, so that the included angle between the target object and the specified direction in the preview image is matched with the adjusted first included angle;
for any sub-policy of the tuning policies, the tuning module comprises:
the sub-strategy executing unit is used for executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the device according to the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
an edge position acquiring unit, which is used for acquiring the edge position of the target object in the preview image of the target object in the process that the user moves the device according to the direction indicated by the sub-strategy;
the judging unit is used for determining whether the adjusted direction of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the device in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies;
the edge position acquisition unit is used for positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the device according to the direction indicated by the sub-strategy; determining an edge position of the target object in the preview image of the target object based on the position of the vertex of the target object in the preview image of the target object.
6. The apparatus of claim 5, the adjustment strategies comprising an up-down adjustment sub-strategy, a left-right adjustment sub-strategy, and a distance adjustment sub-strategy.
7. An image acquisition device applied to a scene of identity verification of a user with vision impairment, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
in the process of acquiring an image of a target object through a camera assembly, detecting whether an included angle between the target object and an appointed direction in a preview image is matched with a first included angle, wherein the first included angle is an included angle between an image acquisition frame and the appointed direction;
if not, determining the moving direction and the moving angle of the terminal equipment based on the included angle between the target object and the specified direction in the preview image and the first included angle;
reminding a user to move the terminal equipment in a voice and/or vibration mode to adjust the first included angle based on the moving direction and the moving angle of the terminal equipment, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle;
if the preview image of the target object is detected not to be in a preset image acquisition frame, acquiring the relative position of the preview image of the target object and the image acquisition frame;
sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the image acquisition equipment in a voice and/or vibration mode in the process of executing each sub-strategy so as to adjust the position of the image acquisition frame, so that the preview image of the target object is in the image acquisition frame;
when the preview image of the target object is in the image acquisition frame, acquiring the image of the target object and outputting the image of the target object;
for any sub-strategy in the adjustment strategies, sequentially executing each sub-strategy in a preset adjustment strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal device in a voice and/or vibration mode in the process of executing each sub-strategy to adjust the position of the image acquisition frame so that the preview image of the target object is in the image acquisition frame, including:
executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
acquiring the edge position of the target object in a preview image of the target object in the process that a user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining whether the adjusted orientation of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the terminal equipment in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies;
the obtaining of the edge position of the target object in the preview image of the target object by the user in the process of moving the terminal device according to the direction indicated by the sub-policy includes:
positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining an edge position of the target object in the preview image of the target object based on the position of the vertex of the target object in the preview image of the target object.
8. A storage medium for use in a scenario of identity verification for a user with vision impairment, the storage medium storing computer-executable instructions that, when executed, implement the following:
in the process of acquiring an image of a target object through a camera assembly, detecting whether an included angle between the target object and an appointed direction in a preview image is matched with a first included angle, wherein the first included angle is an included angle between an image acquisition frame and the appointed direction;
if not, determining the moving direction and the moving angle of the terminal equipment based on the included angle between the target object and the specified direction in the preview image and the first included angle;
reminding a user to move the terminal equipment in a voice and/or vibration mode to adjust the first included angle based on the moving direction and the moving angle of the terminal equipment, so that the included angle between the target object and the appointed direction in the preview image is matched with the adjusted first included angle;
if the preview image of the target object is detected not to be in a preset image acquisition frame, acquiring the relative position of the preview image of the target object and the image acquisition frame;
sequentially executing each sub-strategy in preset adjusting strategies based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal device in a voice and/or vibration mode in the process of executing each sub-strategy to adjust the position of the image acquisition frame so that the preview image of the target object is in the image acquisition frame;
when the preview image of the target object is in the image acquisition frame, acquiring the image of the target object and outputting the image of the target object;
for any sub-strategy in the adjustment strategies, sequentially executing each sub-strategy in a preset adjustment strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal device in a voice and/or vibration mode in the process of executing each sub-strategy to adjust the position of the image acquisition frame so that the preview image of the target object is in the image acquisition frame, including:
executing the sub-strategy based on the relative position of the preview image of the target object and the image acquisition frame, and reminding a user to move the terminal equipment in the direction indicated by the sub-strategy in a voice mode so as to adjust the position of the image acquisition frame;
acquiring the edge position of the target object in a preview image of the target object in the process that a user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining whether the adjusted orientation of the image acquisition frame meets the condition corresponding to the sub-strategy or not based on the edge position of the target object in the preview image of the target object;
if so, reminding the user to stop moving the terminal equipment in a vibration mode, and acquiring the relative position of the preview image of the current target object and the image acquisition frame to execute the next sub-strategy in the adjustment strategies;
the obtaining of the edge position of the target object in the preview image of the target object by the user in the process of moving the terminal device according to the direction indicated by the sub-policy includes:
positioning the position of the vertex of the target object in the preview image of the target object based on a preset corner positioning algorithm in the process that the user moves the terminal equipment according to the direction indicated by the sub-strategy;
determining an edge position of the target object in the preview image of the target object based on the position of the vertex of the target object in the preview image of the target object.
CN202110649617.6A 2021-06-10 2021-06-10 Image acquisition method, device and equipment Active CN113411477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110649617.6A CN113411477B (en) 2021-06-10 2021-06-10 Image acquisition method, device and equipment

Publications (2)

Publication Number Publication Date
CN113411477A CN113411477A (en) 2021-09-17
CN113411477B CN113411477B (en) 2023-03-10

Family

ID=77683596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110649617.6A Active CN113411477B (en) 2021-06-10 2021-06-10 Image acquisition method, device and equipment

Country Status (1)

Country Link
CN (1) CN113411477B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116567385A (en) * 2023-06-14 2023-08-08 深圳市宗匠科技有限公司 Image acquisition method and image acquisition device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2958086A1 (en) * 2014-06-18 2015-12-23 OVD Kinegram AG Method for testing a security document
CN107483816A (en) * 2017-08-11 2017-12-15 西安易朴通讯技术有限公司 Image processing method, device and electronic equipment
CN110138999A (en) * 2019-05-30 2019-08-16 苏宁金融服务(上海)有限公司 A kind of papers-scanning method and device for mobile terminal
CN110263775A (en) * 2019-05-29 2019-09-20 阿里巴巴集团控股有限公司 Image-recognizing method, device, equipment and authentication method, device, equipment
CN111131702A (en) * 2019-12-25 2020-05-08 航天信息股份有限公司 Method and device for acquiring image, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105578020B (en) * 2014-10-14 2020-04-03 深圳富泰宏精密工业有限公司 Self-timer system and method
CN105956528A (en) * 2016-04-22 2016-09-21 沈洪泉 Man-machine interface system used for guiding and indicating mobile terminal iris identification
CN108307120B (en) * 2018-05-11 2020-07-17 阿里巴巴(中国)有限公司 Image shooting method and device and electronic terminal
CN111163261B (en) * 2019-12-25 2022-03-01 上海肇观电子科技有限公司 Target detection method, circuit, visual impairment assistance device, electronic device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant