CN109618100B - Method, device and system for judging field shooting image - Google Patents

Method, device and system for judging field shooting image

Info

Publication number
CN109618100B
CN109618100B (application CN201910037990.9A)
Authority
CN
China
Prior art keywords
image
guide
target image
light spot
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910037990.9A
Other languages
Chinese (zh)
Other versions
CN109618100A (en)
Inventor
刘海敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910037990.9A
Publication of CN109618100A
Application granted
Publication of CN109618100B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a method, a device, and a system for determining a live-shot (field-shot) image, relating to the technical field of identity authentication. The method includes: in response to a shooting instruction, providing a guidance indication; determining whether the light spot on a target image matches the guidance indication, where the target image contains a target object and the light spot is produced when the target object is illuminated; and if so, determining that the target image is a live-shot image. The invention better ensures that the acquired image was shot on site and effectively improves the reliability of identity authentication.

Description

Method, device and system for judging field shooting image
Technical Field
The present invention relates to the technical field of identity authentication, and in particular to a method, a device, and a system for determining a live-shot image.
Background
When a user uses a mobile phone to handle services such as opening a bank card or authenticating identity, identity authentication is required, and the conventional approach is to ask the user to upload certificate information, such as an image of an identity card. However, it is difficult to ensure that the uploaded certificate image was shot on site. For example, when a mobile phone card service is handled online, a lawbreaker may upload an illegally obtained image of another person's identity card and handle the service with it. When certificate images can be shot off site, the possibility that lawbreakers steal other people's certificate information during service handling increases greatly, and the reliability of this authentication approach is poor.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a method, an apparatus, and a system for determining a live-shot image, which better ensure that an acquired image was shot on site and effectively improve the reliability of identity authentication.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for determining a live-shot image, the method including: in response to a shooting instruction, providing a guidance indication; determining whether the light spot on a target image matches the guidance indication, where the target image contains a target object and the light spot is produced when the target object is illuminated; and if so, determining that the target image is a live-shot image.
Further, the step of providing a guidance indication comprises: displaying a guide identifier, a guide path and/or a guide language; and/or broadcasting the guide words in voice.
Further, the step of displaying the guide identifier includes: displaying a sample image; setting a guide identifier at a designated position on the sample image; the step of judging whether the light spot on the target image is matched with the guide indication comprises the following steps: judging whether the position of the light spot on the target image is matched with the specified position on the sample image; if so, determining that the spot matches the guidance indication.
Further, the step of determining whether the position of the light spot on the target image matches the designated position on the sample image includes: acquiring a first relative position between the guide mark and a first reference point of the sample image and a second relative position between the light spot and a second reference point of the target image; wherein the position of the first reference point on the sample image is the same as the position of the second reference point on the target image; judging whether the difference between the first relative position and the second relative position is within a preset difference threshold value; and if so, determining that the position of the light spot on the target image is matched with the specified position on the sample image.
Further, the step of displaying the guide path includes: displaying a guidance path in the designated area; the number of the target images is multiple; the step of judging whether the light spot on the target image is matched with the guide indication comprises the following steps: tracking the light spots on the target images to obtain the motion tracks of the light spots; judging whether the motion trail of the light spot is matched with a guide path in the designated area or not; if so, determining that the spot matches the guidance indication.
Further, the guiding words comprise position information of the light spots or motion track information of the light spots; the step of judging whether the light spot on the target image is matched with the guide indication comprises the following steps: if the guide words contain the position information of the light spots, judging whether the positions of the light spots on the target image are matched with the position information of the light spots; if yes, determining that the light spot is matched with the guiding indication; if the guide words contain the motion trail information of the light spots, tracking the light spots on the target images to obtain the motion trail of the light spots; judging whether the motion trail of the light spot is matched with the motion trail information of the light spot; if so, determining that the spot matches the guidance indication.
Further, the step of tracking the light spots on the plurality of target images to obtain the motion trail of the light spots includes: and generating the motion trail of the light spot according to the position of the light spot in each target image and the acquisition time of each target image.
Further, the light spot on the target image is determined by the following steps: performing brightness detection on the target image to obtain the brightness value of each pixel point in the target image; determining a brightness area on the target image according to the position of the pixel point with the brightness value larger than a preset brightness threshold; judging whether the radius of a circumscribed circle of the brightness area is within a preset radius interval or not; and if so, determining the brightness area as the light spot.
Further, the target object is a certificate; the method further comprises the following steps: and performing character recognition on the target image by adopting an optical character recognition technology to obtain certificate information of the target object.
In a second aspect, an embodiment of the present invention further provides a device for determining a live-shot image, where the device includes: the instruction response module is used for responding to the shooting instruction and providing a guide instruction; the judging module is used for judging whether the light spots on the target image are matched with the guide indication; the target image comprises a target object, and the light spot is caused when the target object is illuminated; and the scene shooting determining module is used for determining the target image as a scene shooting image when the judgment result of the judging module is yes.
In a third aspect, an embodiment of the present invention provides a system for determining a live-shot image, where the system includes: the device comprises an image acquisition device, a light-emitting device, a processor and a storage device; the image acquisition device is used for acquiring a target image; the light-emitting device is used for causing light spots when irradiating the target object; the storage means having stored thereon a computer program which, when executed by the processor, performs the method of any of the first aspects described above.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
The embodiments of the present invention provide a method, a device, and a system for determining a live-shot image, which first provide a guidance indication in response to a shooting instruction, then determine whether the light spot on the target image matches the guidance indication, and if so, determine that the target image is a live-shot image. By checking whether the light spot (produced by illumination) on the target image matches the provided guidance indication, the embodiments better ensure that the acquired image was shot on site, thereby effectively improving the reliability of identity verification.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for determining a live image according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a display interface of a guidance indicator according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a display interface of a first guiding path according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a display interface of a second guiding path provided by an embodiment of the invention;
FIG. 6 is a schematic diagram of a display interface of a guide language provided by an embodiment of the present invention;
fig. 7 is a schematic diagram illustrating a display interface for broadcasting a guidance language by voice according to an embodiment of the present invention;
fig. 8 is a block diagram illustrating a structure of a device for determining live-shot images according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The inventors have recognized that the authentication approach used in existing service handling makes it difficult to ensure that the certificate image uploaded by a user was shot on site, which increases the possibility that lawbreakers steal other people's certificate information during service handling and leaves the authentication approach unreliable. Based on this, embodiments of the present invention provide a method, an apparatus, and a system for determining a live-shot image. The technique may be applied to any scenario in which it is necessary to determine whether an image was shot on site, such as a mobile terminal APP that requires a user to upload certificate information, and may also be applied directly to service machines set up by organizations such as banks and vehicle administration offices to collect user certificate information. For ease of understanding, embodiments of the present invention are described in detail below.
Example one:
First, an example electronic device 100 for implementing the method, apparatus, and system for determining a live-shot image according to embodiments of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client functionality (implemented by the processor) and/or other desired functionality in the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components.
The exemplary electronic device for implementing the method, apparatus, and system for determining a live-shot image according to embodiments of the present invention may be a smart terminal such as a smartphone, a tablet computer, or an ATM.
Example two:
referring to a flowchart of a method for determining a live-action image shown in fig. 2, the method may be applied to an image capturing device (also referred to as an image capturing apparatus) such as a mobile phone, a camera, and the like, and the image capturing device is configured with a light emitting device (also referred to as a light emitting apparatus) such as a flashlight, a laser emitter, and the like, and the light emitting device may be integrated with the image capturing device or may be a separate structure. The method specifically comprises the following steps:
In step S202, a guidance indication is provided in response to the shooting instruction.
In this embodiment, the shooting instruction may be shooting request information input by a user to the image capturing apparatus through a screen touch, a key operation, a voice operation, or the like, such as clicking a preset "shooting" icon. When the shooting instruction is responded, the image acquisition equipment can start the camera and the light-emitting device, the camera and the light-emitting device are both aligned to the target object, and the guidance instruction is provided through a display interface or a player of the image acquisition equipment. The camera can be used for collecting a target image containing a target object; the light emitting device may be used to illuminate the target object in the form of a beam or a laser, causing a spot on the target object. In practical applications, a user changes the light beam irradiation direction of the light-emitting device or moves the target object according to the guidance instruction, so that the light spot caused when the light-emitting device irradiates the target object reaches a specified position or a specified motion track in cooperation with the guidance instruction.
In some possible implementations, the guidance indication may be a preset guide identifier, guide path, or guide words stored in a database. When a shooting instruction is received, one or more guidance indications may be selected at random from the plurality stored in the database (for example, using a generator function such as the one provided in JavaScript ES6) and presented to the user, as in the sketch below.
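As a minimal sketch of this random selection (written in Python for consistency with the other sketches in this description rather than the JavaScript ES6 generator mentioned above; the database entries shown are hypothetical):

```python
import random

# Hypothetical guidance database; in practice the entries would be
# preconfigured guide identifiers, guide paths, or guide words.
GUIDANCE_DATABASE = [
    {"type": "identifier", "position": "lower_right_of_photo"},
    {"type": "path", "shape": "circle"},
    {"type": "words", "text": "Please move the light spot to the lower-right corner"},
]

def pick_guidance(database=GUIDANCE_DATABASE, count=1):
    """Randomly select one or more guidance indications to present to the user."""
    return random.sample(database, k=count)
```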
The target object may include a certificate or a living body, wherein the certificate may be an identification card, a real estate certificate, a driving license, a academic certificate, a social security card, a business card, and the like, the living body may be a human, but may also be an animal in practical applications, such as a pet dog, a pet cat, and the like when registering a pet.
In step S204, it is determined whether the light spot on the target image matches the guidance indication. The target image contains the target object, and the light spot is produced when the target object is illuminated; for example, the light spot may be produced when the light-emitting device described above illuminates the target object.
In some embodiments, the light spot may be formed by the light beam emitted by the light-emitting device reflecting off the surface of the target object. This way of forming the spot places low requirements on the light-emitting device; the flashlight built into a mobile phone is sufficient. Alternatively, the light spot may be formed by the beam of a light-emitting device such as a laser falling directly on the target object, for example a laser dot projected onto it, which is applicable to a wider range of target objects.
After the light spot is produced on the target object, the camera captures the target object together with the light spot to obtain the target image. Because the light spot on the target object must be moved as instructed, a user cannot simply upload an illegally obtained target image of another person (such as an identity card image); instead, the user must move the target object or change the beam direction of the light-emitting device according to the randomly presented guidance indication so that the position or motion trajectory of the light spot on the target image matches the guidance indication, thereby completing the on-site shooting process.
Furthermore, the way of determining whether the light spot matches the guidance indication depends on the type of guidance indication. For example, if the guidance indication is a guide identifier, it can be determined whether the position of the light spot on the target image matches the position of the guide identifier; if the guidance indication is a guide path, it can be determined whether the motion trajectory of the light spot across the target images matches the guide path.
In step S206, when the determination result is yes, the target image is determined to be a live-shot image.
In one possible implementation, when the determination result is yes, a prompt such as a "√" icon or the text "matching succeeded" may be shown on the display interface of the image acquisition device, or a voice prompt may be played. It can be understood that when the determination result is no, a prompt may be given in a similar way, for example the text "matching failed" on the interface; details are not repeated here.
The method for determining a live-shot image provided by the embodiment of the present invention first provides a guidance indication in response to a shooting instruction, then determines whether the light spot on the target image matches the guidance indication, and if so, determines that the target image is a live-shot image. By checking whether the light spot (produced by illumination) on the target image matches the provided guidance indication, the embodiment better ensures that the acquired image was shot on site, thereby effectively improving the reliability of identity verification.
This embodiment provides a specific way of determining the light spot on the target image. First, brightness detection is performed on the target image to obtain the brightness value of each pixel. A bright region is then determined on the target image from the positions of the pixels whose brightness values exceed a preset brightness threshold. Next, it is determined whether the radius of the circumscribed circle of the bright region lies within a preset radius interval (for example, 2 mm ≤ r ≤ 5 mm); if so, the bright region is determined to be the light spot; if not, the bright region may be noise or another interference pattern on the target object and cannot be determined to be the light spot. Further, the coordinates (x1, y1) of the center of the light spot may be computed with the center or a designated vertex of the target image as the coordinate origin, and the position of the light spot may be represented by these center coordinates. A sketch of this step is given below.
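A minimal sketch of this spot-detection step, assuming OpenCV is available (the brightness threshold and the radius bounds in pixels are illustrative assumptions; the description above states the radius interval in millimetres, which would require a pixels-per-millimetre conversion for the actual capture setup):

```python
import cv2

def find_light_spot(image_bgr, brightness_threshold=230,
                    min_radius_px=8, max_radius_px=40):
    """Return (x, y, radius) of a detected light spot, or None if absent."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Keep only pixels whose brightness exceeds the preset threshold.
    _, mask = cv2.threshold(gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # Circumscribed circle of the bright region.
        (x, y), radius = cv2.minEnclosingCircle(contour)
        # Accept the bright region as the light spot only if its circumscribed
        # circle lies within the preset radius interval; otherwise treat it as
        # noise or an interference pattern on the target object.
        if min_radius_px <= radius <= max_radius_px:
            return (x, y, radius)
    return None
```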
For ease of understanding, the ways of providing the guidance indication are further described in this embodiment; refer to the following manners one to four.
the first method is as follows: and displaying the guide identification. In particular implementations, the guidance indicator may be displayed on a display interface. For easy understanding, reference may be made to a schematic display interface of the guidance sign shown in fig. 3, where the display interface is, for example, a mobile phone screen, a computer screen, or the like, and an upper portion of the display interface is a collection area of the target image, and is used for displaying the target image collected by the camera; the lower part of the display interface is a display area for guiding instructions, and a sample image is displayed in the area; the guide mark is set at a specified position on the sample image. The example image is an identity card, and the guide identifier is exemplarily described as follows: the guide mark may be a mark having a certain color and shape, such as a circular white spot similar to the outline of the light spot, and the circular white spot (i.e. the guide mark) is arranged on the identification card such as "ethnic group: the rear side of the Chinese character, a certain vertex of the identity card, the right lower side of the photo and the like.
Based on the above guidance indicator, this embodiment provides a specific implementation manner for determining whether the light spot on the target image matches the guidance indicator, and the following steps may be referred to: judging whether the position of the light spot on the target image is matched with the specified position on the sample image or not; if so, it is determined that the spot matches the guidance indication.
Because the guide identifier is preset at a designated position on the sample image, the light spot matches the guide identifier when, for example, the guide identifier is located to the left of the name on the sample identity card and the light spot is detected to the left of the name on the target image; that is, if the relative position of the light spot on the target image is the same as the relative position of the guide identifier on the sample image, the target image can be determined to be a live-shot image. For a specific implementation, refer to the following steps (1) and (2):
(1) A first relative position between the guide identifier and a first reference point of the sample image and a second relative position between the light spot and a second reference point of the target image are acquired. The position of the first reference point on the sample image is the same as the position of the second reference point on the target image; for example, the first reference point is the vertex at the lower left corner of the sample image, and the second reference point is the vertex at the lower left corner of the target image. The first reference point may be represented as the origin P(0, 0) in a rectangular coordinate system, a polar coordinate system, or another coordinate system, and the first relative position between the guide identifier and the first reference point of the sample image may be represented as P(x1, y1) in the rectangular coordinate system or P(ρ1, θ1) in the polar coordinate system; correspondingly, the second reference point may be represented as the origin Q(0, 0), and the second relative position between the light spot and the second reference point of the target image may be represented as Q(x2, y2) in the rectangular coordinate system or Q(ρ2, θ2) in the polar coordinate system.
(2) It is determined whether the difference between the first relative position and the second relative position is within a preset difference threshold; if so, the position of the light spot on the target image is determined to match the designated position on the sample image, which indicates that the light spot matches the guide identifier.
For ease of understanding, the difference between the first relative position and the second relative position is illustrated with the first relative position P(x1, y1) and the second relative position Q(x2, y2) in a rectangular coordinate system. The difference may be expressed as (Δx, Δy) = (|x1 − x2|, |y1 − y2|), and the preset difference threshold may be (Δx*, Δy*); the difference being within the preset difference threshold can then be understood as Δx ≤ Δx* and Δy ≤ Δy*.
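A minimal sketch of this matching check under the rectangular-coordinate formulation above (coordinates in pixels relative to the respective reference points; the threshold values are illustrative assumptions):

```python
def position_matches(guide_xy, spot_xy, threshold=(15.0, 15.0)):
    """Check whether the light spot's relative position matches the guide identifier's.

    guide_xy:  (x1, y1), guide identifier relative to the sample image's reference point.
    spot_xy:   (x2, y2), light spot relative to the target image's reference point.
    threshold: (dx_max, dy_max), the preset difference threshold.
    """
    delta_x = abs(guide_xy[0] - spot_xy[0])
    delta_y = abs(guide_xy[1] - spot_xy[1])
    return delta_x <= threshold[0] and delta_y <= threshold[1]

# Example: a guide identifier at (120, 80) on the sample image and a light spot
# at (126, 74) on the target image are considered a match under a (15, 15) threshold.
assert position_matches((120, 80), (126, 74))
```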
It can be understood that, in practical applications, in order to avoid the light spot from blocking the key information of the target object, the guide identifier may avoid a portion of the sample image corresponding to the key information of the target object, such as an identification card number on the identification card, a facial feature of a human face, and the like.
Manner two: displaying a guide path. The guide path may be any path such as a line segment, a triangle, a quadrangle, a circle, or a digit pattern. Referring to the schematic display interface of the first guide path shown in fig. 4, the guidance indication provided on the display interface is a circular guide path, which instructs the user to move the beam direction of the light-emitting device so that the motion trajectory of the light spot formed on the target object is also a circle, i.e., so that the motion trajectory of the light spot matches the guide path.
With a guide path, a plurality of target images are acquired, and the light spot occupies different positions in different images so as to form a motion trajectory to be matched against the guide path. This embodiment accordingly provides a specific way of determining whether the light spot on the target images matches the guidance indication; refer to the following steps: track the light spot across the plurality of target images to obtain the motion trajectory of the light spot; determine whether the motion trajectory of the light spot matches the guide path in the designated area; and if so, determine that the light spot matches the guidance indication.
The motion trajectory of the light spot may be generated from the position of the light spot in each target image and the acquisition time of each target image. In a specific implementation, the light spot in each target image can be tracked in order of acquisition time, and the trajectory between successive spot positions can be drawn segment by segment.
Referring to the circular guide path shown in fig. 4, the camera may record video of the target object while the light spot moves. To reduce workload, a subset of frames may be extracted from the captured video stream at a preset frame interval, and the light spot may be tracked on the extracted frames. If the tracked motion trajectory of the light spot forms a circle, the light spot matches the guidance indication, and the target image can then be determined to be a live-shot image; see the sketch below.
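A minimal sketch of trajectory construction and a simple circularity test, reusing the hypothetical find_light_spot helper from the earlier sketch (the frame interval, the minimum number of points, and the circularity tolerance are illustrative assumptions; an implementation could equally compare the trajectory against the displayed guide path point by point):

```python
import numpy as np

def build_trajectory(frames, timestamps, frame_interval=5):
    """Collect (time, x, y) spot positions from every Nth extracted frame."""
    trajectory = []
    for i in range(0, len(frames), frame_interval):
        spot = find_light_spot(frames[i])  # from the earlier sketch
        if spot is not None:
            x, y, _ = spot
            trajectory.append((timestamps[i], x, y))
    return trajectory

def trajectory_is_circular(trajectory, tolerance=0.15):
    """Rough test: all tracked points lie near a common radius around their centroid."""
    if len(trajectory) < 8:
        return False
    points = np.array([(x, y) for _, x, y in trajectory], dtype=float)
    center = points.mean(axis=0)
    radii = np.linalg.norm(points - center, axis=1)
    mean_radius = radii.mean()
    if mean_radius == 0:
        return False
    # The spread of radii relative to the mean radius must stay small.
    return (radii.std() / mean_radius) < tolerance
```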
In addition, to further ensure that the uploaded image was shot on site and to strengthen the reliability of identity authentication, the guide path may be combined with the sample image: the guide path may consist of a plurality of guide identifiers, each set at a designated position on the sample image. Referring to the schematic display interface of the second guide path shown in fig. 5, a quadrilateral guide path may consist of guide identifiers placed at the four vertices of the identity card sample image. Based on this, the embodiment provides a specific way of determining whether the light spot on the target images matches the guidance indication; refer to the following steps: first track the light spot across the plurality of target images, and determine whether the position of the light spot in each target image matches the designated position of the corresponding guide identifier in the guide path (for example, whether the light spot falls on the four vertices of the target image); if so, determine that the motion trajectory of the light spot matches the guide path in the sample image, for example that the motion trajectory formed by the light spot across the plurality of target images is a quadrilateral matching the guide path. The target image can then be determined to be a live-shot image.
It can be understood that, in order to avoid the light spot from blocking the key information of the target object, the guide path should avoid a portion of the sample image corresponding to the key information of the target object, such as an identification card number on an identification card, a facial feature of a human face, and the like.
Manner three: displaying guide words. Referring to the schematic display interface of guide words shown in fig. 6, the guide words in fig. 6 contain position information of the light spot and instruct the user to move the light spot to a designated position on the target object. Based on such guide words, determining whether the light spot matches the guidance indication may include: determining whether the position of the light spot on the target image matches the spot position information; and if so, determining that the light spot matches the guidance indication. Since the guide words are preset, the spot position information expressed by them can be obtained directly. For the way of determining the position of the light spot on the target image and of determining whether that position matches the spot position information, refer to manner one; details are not repeated here.
The guide words may also contain motion trajectory information of the light spot, such as "please move the light spot so that its motion trajectory forms a circle". Based on such guide words, determining whether the light spot matches the guidance indication may include: tracking the light spot across the plurality of target images to obtain its motion trajectory; determining whether the motion trajectory of the light spot matches the motion trajectory information; and if so, determining that the light spot matches the guidance indication. Since the guide words are preset, the motion trajectory information expressed by them can be obtained directly. For the way of determining the motion trajectory of the light spot and of determining whether it matches the motion trajectory information, refer to manner two; details are not repeated here.
It is to be understood that the above is merely exemplary; the guide words may contain other content as well and are not limited thereto.
Manner four: broadcasting the guide words by voice. The guide words can be broadcast in many ways, such as the following A, B, and C. A. The guide words are broadcast once when the shooting instruction is received. B. After the shooting instruction is received, the guide words are broadcast repeatedly at a preset time interval (for example, 3 seconds) until a result of whether the motion trajectory of the light spot matches the motion trajectory information is obtained. C. The image acquisition device is provided with operation controls, such as keys or touch icons, for repeating or interrupting the broadcast; after one broadcast of the guide words is completed, the user can use these controls to have the image acquisition device continue broadcasting the guide words or interrupt the broadcast.
When the guide words are broadcast by voice, the display interface may contain only the capture area of the target image; alternatively, based on broadcasting way C, an operation control may be added in the blank area beside the capture area of the target image. For example, in the schematic display interface for voice broadcasting of the guide words shown in fig. 7, the control displayed on the interface indicates that the guide words are being broadcast.
In practical applications, the capture area of the target image appearing in the above four manners may be a mobile phone screen, the viewfinder of a digital camera, and the like, and is not limited herein. The above four manners are merely exemplary illustrations of the guidance indication and of how to determine that the light spot matches it, and should not be construed as limiting.
In addition, the method for determining a live-shot image may further include: setting a specified verification time and determining whether the user completes the matching of the light spot with the guidance indication within that time; if not, steps S202 to S206 are re-executed. When step S202 is executed again, the provided guidance indication is updated so that the new guidance indication differs from the previous one.
Further, the target object may be a certificate. After the target image is determined to be a live-shot image, the method for determining a live-shot image may further include: performing character recognition on the target image using optical character recognition (OCR) to obtain the certificate information of the target object. In a specific implementation, the target image may first be preprocessed, including geometric transformation (perspective, warping, rotation, etc.), distortion correction, deblurring, image enhancement, and illumination correction. The position, extent, and layout of the text in the preprocessed target image are then detected to obtain boxed regions containing text. Finally, the content of the boxed text regions is recognized to obtain the text information, such as name, gender, ethnic group, home address, and identity card number.
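A minimal OCR sketch under the assumption that OpenCV and pytesseract are available (the patent does not prescribe a particular OCR engine; the preprocessing here is a simplified stand-in for the fuller pipeline described above, and recognizing Chinese identity cards would additionally require the corresponding Tesseract language data and field-level parsing, which are omitted):

```python
import cv2
import pytesseract

def recognize_certificate_text(image_bgr):
    """Light preprocessing followed by OCR; returns the raw recognized text."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Simple illumination normalization and binarization in place of the
    # geometric correction, deblurring, and enhancement steps described above.
    gray = cv2.equalizeHist(gray)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # "chi_sim" assumes the simplified-Chinese language pack is installed.
    return pytesseract.image_to_string(binary, lang="chi_sim")
```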
In summary, by determining whether the light spot (produced by illumination) on the target image matches the provided guidance indication, the method for determining a live-shot image according to this embodiment better ensures that the acquired image was shot on site, thereby effectively improving the reliability of identity authentication.
Example three:
referring to fig. 8, a block diagram of a device for determining live-shot images is shown, the device including:
an instruction response module 802, configured to provide a guidance instruction in response to the shooting instruction;
a judging module 804, configured to judge whether a light spot on the target image matches the guidance instruction; the target image comprises a target object, and the light spot is caused when the target object is illuminated;
and a live shooting determining module 806, configured to determine that the target image is a live shooting image when a determination result of the determining module is yes.
The embodiment of the present invention provides a device for determining a live-shot image, which first provides a guidance indication in response to a shooting instruction, then determines whether the light spot on the target image matches the guidance indication, and when the determination result is yes, determines that the target image is a live-shot image. By checking whether the light spot (produced by illumination) on the target image matches the provided guidance indication, the embodiment better ensures that the acquired image was shot on site, thereby effectively improving the reliability of identity verification.
In one embodiment, the command response module 802 is further configured to: displaying a guide identifier, a guide path and/or a guide language; and/or, voice broadcasting the guide language.
In one embodiment, the command response module 802 is further configured to: displaying a sample image; setting a guide identifier at a designated position on the sample image; the determining module 804 is further configured to: judging whether the position of the light spot on the target image is matched with the specified position on the sample image or not; if so, it is determined that the spot matches the guidance indication.
In an embodiment, the determining module 804 is further configured to: acquiring a first relative position between the guide mark and a first reference point of the sample image and a second relative position between the light spot and a second reference point of the target image; wherein the position of the first reference point on the sample image is the same as the position of the second reference point on the target image; judging whether the difference between the first relative position and the second relative position is within a preset difference threshold value; if so, determining that the position of the light spot on the target image is matched with the specified position on the sample image.
In one embodiment, the command response module 802 is further configured to: displaying a guidance path in the designated area; the number of target images is multiple; the determining module 804 is further configured to: tracking the light spots on the multiple target images to obtain the motion tracks of the light spots; judging whether the motion track of the light spot is matched with a guide path in the designated area or not; if so, it is determined that the spot matches the guidance indication.
In one embodiment, the guiding words comprise position information of the light spots or motion track information of the light spots; the determining module 804 is further configured to: if the guide words contain the light spot position information, judging whether the position of the light spot on the target image is matched with the light spot position information; if yes, determining that the light spot is matched with the guiding indication; if the guide words contain the motion trail information of the light spots, tracking the light spots on the multiple target images to obtain the motion trail of the light spots; judging whether the motion trail of the light spot is matched with the motion trail information of the light spot; if so, it is determined that the spot matches the guidance indication.
In an embodiment, the determining module 804 is further configured to: and generating the motion trail of the light spot according to the position of the light spot in each target image and the acquisition time of each target image.
In one embodiment, the speckle on the target image is determined by: performing brightness detection on the target image to obtain the brightness value of each pixel point in the target image; determining a brightness area on the target image according to the position of the pixel point with the brightness value larger than the preset brightness threshold value; judging whether the radius of a circumscribed circle of the brightness area is within a preset radius interval or not; if so, the brightness area is determined as a light spot.
In one embodiment, the target object is a certificate; the device for determining a live-shot image further includes a recognition module (not shown in the figure) for performing character recognition on the target image using optical character recognition to obtain the certificate information of the target object.
The device provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for the sake of brief description, reference may be made to corresponding contents in the foregoing embodiment.
Example four:
based on the foregoing embodiment, this embodiment provides a system for determining a live-shot image, including: the device comprises an image acquisition device, a light-emitting device, a processor and a storage device; the image acquisition device is used for acquiring a target image; a light emitting device for causing a light spot when irradiating a target object; the storage device has a computer program stored thereon, which, when executed by the processor, performs any of the methods provided in the second embodiment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Further, the present embodiment also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processing device, the computer program performs the steps of any one of the methods provided in the second embodiment.
The method, device, and system for determining a live-shot image provided by the embodiments of the present invention are accompanied by a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the method described in the foregoing method embodiment. For specific implementation, refer to the method embodiment; details are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (12)

1. A method for determining a live image, the method comprising:
responding to a shooting instruction, and providing a guide instruction; wherein the guide indication is used for enabling the light spot to reach a specified position or a specified motion track in cooperation with the guide indication;
judging whether the light spots on the target image are matched with the guide indication; the target image comprises a target object, and the light spot is caused when the target object is illuminated;
if yes, determining the target image as a live shooting image.
2. The method of claim 1, wherein the step of providing the guidance indication comprises:
displaying a guide identifier, a guide path and/or a guide language;
and/or the like, and/or,
and broadcasting the guide words through voice.
3. The method of claim 2, wherein the step of displaying the guide identifier comprises:
displaying a sample image;
setting a guide identifier at a designated position on the sample image;
the step of judging whether the light spot on the target image is matched with the guide indication comprises the following steps:
judging whether the position of the light spot on the target image is matched with the specified position on the sample image;
if so, determining that the spot matches the guidance indication.
4. The method according to claim 3, wherein the step of determining whether the position of the light spot on the target image matches the designated position on the sample image comprises:
acquiring a first relative position between the guide mark and a first reference point of the sample image and a second relative position between the light spot and a second reference point of the target image; wherein the position of the first reference point on the sample image is the same as the position of the second reference point on the target image;
judging whether the difference between the first relative position and the second relative position is within a preset difference threshold value;
and if so, determining that the position of the light spot on the target image is matched with the specified position on the sample image.
5. The method of claim 2, wherein the step of displaying the guide path comprises:
displaying a guidance path in the designated area;
the number of the target images is multiple; the step of judging whether the light spot on the target image is matched with the guide indication comprises:
tracking the light spots on the target images to obtain the motion tracks of the light spots;
judging whether the motion trail of the light spot is matched with a guide path in the designated area or not;
if so, determining that the spot matches the guidance indication.
6. The method according to claim 2, wherein the guide words comprise position information of the light spot or motion track information of the light spot;
the step of judging whether the light spot on the target image is matched with the guide indication comprises:
if the guide words contain the position information of the light spots, judging whether the positions of the light spots on the target image are matched with the position information of the light spots; if yes, determining that the light spot is matched with the guiding indication;
if the guide words contain the motion trail information of the light spots, tracking the light spots on the target images to obtain the motion trail of the light spots; judging whether the motion trail of the light spot is matched with the motion trail information of the light spot; if so, determining that the spot matches the guidance indication.
7. The method according to claim 5 or 6, wherein the step of tracking the light spots on the plurality of target images to obtain the motion tracks of the light spots comprises:
and generating the motion trail of the light spot according to the position of the light spot in each target image and the acquisition time of each target image.
8. The method of claim 1, wherein the spots on the target image are determined by:
performing brightness detection on the target image to obtain the brightness value of each pixel point in the target image;
determining a brightness area on the target image according to the position of the pixel point with the brightness value larger than a preset brightness threshold;
judging whether the radius of a circumscribed circle of the brightness area is within a preset radius interval or not;
and if so, determining the brightness area as the light spot.
9. The method of claim 1, wherein the target object is a certificate; the method further comprises the following steps:
and performing character recognition on the target image by adopting an optical character recognition technology to obtain certificate information of the target object.
10. A device for determining a live image, the device comprising:
the instruction response module is used for responding to the shooting instruction and providing a guide instruction; wherein the guide indication is used for enabling the light spot to reach a specified position or a specified motion track in cooperation with the guide indication;
the judging module is used for judging whether the light spots on the target image are matched with the guide indication; the target image comprises a target object, and the light spot is caused when the target object is illuminated;
and the scene shooting determining module is used for determining the target image as a scene shooting image when the judgment result of the judging module is yes.
11. A system for determining a live image, the system comprising:
the image acquisition device is used for acquiring a target image;
the light-emitting device is used for causing light spots when irradiating the target object;
the storage device has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of the preceding claims 1 to 9.
CN201910037990.9A 2019-01-15 2019-01-15 Method, device and system for judging field shooting image Active CN109618100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910037990.9A CN109618100B (en) 2019-01-15 2019-01-15 Method, device and system for judging field shooting image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910037990.9A CN109618100B (en) 2019-01-15 2019-01-15 Method, device and system for judging field shooting image

Publications (2)

Publication Number Publication Date
CN109618100A CN109618100A (en) 2019-04-12
CN109618100B true CN109618100B (en) 2020-11-27

Family

ID=66017572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910037990.9A Active CN109618100B (en) 2019-01-15 2019-01-15 Method, device and system for judging field shooting image

Country Status (1)

Country Link
CN (1) CN109618100B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415226A (en) * 2019-07-23 2019-11-05 Oppo广东移动通信有限公司 Measuring method, device, electronic equipment and the storage medium of stray light
CN110443237B (en) * 2019-08-06 2023-06-30 北京旷视科技有限公司 Certificate identification method, device, electronic equipment and computer readable storage medium
CN112995502B (en) * 2021-02-07 2023-04-07 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112906560A (en) * 2021-02-11 2021-06-04 河北鸟巢科技有限公司 Target picture, identification system and identification method
CN116320761A (en) * 2023-03-13 2023-06-23 北京城市网邻信息技术有限公司 Image acquisition method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067460A (en) * 2012-12-14 2013-04-24 厦门天聪智能软件有限公司 Corrective biology identification long distance identity checking method towards judicial community
CN107241306A (en) * 2017-01-06 2017-10-10 深圳市九州安域科技有限公司 A kind of man-machine recognition methods, service end, client and man-machine identifying system
CN108960165A (en) * 2018-07-11 2018-12-07 湖南城市学院 A kind of stadiums population surveillance method based on intelligent video identification technology
CN109409058A (en) * 2018-09-25 2019-03-01 中国平安人寿保险股份有限公司 Identity identifying method, device and computer equipment based on electronic signature

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100954640B1 (en) * 2002-02-05 2010-04-27 파나소닉 주식회사 Personal authentication method and device
JP4704185B2 (en) * 2005-10-27 2011-06-15 富士通株式会社 Biometric authentication system and biometric authentication method
JP5098973B2 (en) * 2008-11-27 2012-12-12 富士通株式会社 Biometric authentication device, biometric authentication method, and biometric authentication program
US9058519B2 (en) * 2012-12-17 2015-06-16 Qualcomm Incorporated System and method for passive live person verification using real-time eye reflection
KR102226177B1 (en) * 2014-09-24 2021-03-10 삼성전자주식회사 Method for executing user authentication and electronic device thereof
CN106156578B (en) * 2015-04-22 2020-02-14 深圳市腾讯计算机系统有限公司 Identity verification method and device
CN106022283A (en) * 2016-05-27 2016-10-12 北京中金国信科技有限公司 Biometric identification method, biometric identification device and identification equipment
US9940753B1 (en) * 2016-10-11 2018-04-10 Disney Enterprises, Inc. Real time surface augmentation using projected light
CN108629260B (en) * 2017-03-17 2022-02-08 北京旷视科技有限公司 Living body verification method and apparatus, and storage medium
CN108229408A (en) * 2018-01-11 2018-06-29 梁庆生 A kind of novel technology that authentication is carried out using recognition of face
CN108346132B (en) * 2018-02-01 2021-04-20 深圳市多图科技有限公司 Certificate imaging method and device, camera device and certificate imaging system
CN108616688A (en) * 2018-04-12 2018-10-02 Oppo广东移动通信有限公司 Image processing method, device and mobile terminal, storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067460A (en) * 2012-12-14 2013-04-24 厦门天聪智能软件有限公司 Corrective biology identification long distance identity checking method towards judicial community
CN107241306A (en) * 2017-01-06 2017-10-10 深圳市九州安域科技有限公司 A kind of man-machine recognition methods, service end, client and man-machine identifying system
CN108960165A (en) * 2018-07-11 2018-12-07 湖南城市学院 A kind of stadiums population surveillance method based on intelligent video identification technology
CN109409058A (en) * 2018-09-25 2019-03-01 中国平安人寿保险股份有限公司 Identity identifying method, device and computer equipment based on electronic signature

Also Published As

Publication number Publication date
CN109618100A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN109618100B (en) Method, device and system for judging field shooting image
CN106599772B (en) Living body verification method and device and identity authentication method and device
US10699103B2 (en) Living body detecting method and apparatus, device and storage medium
US20200184059A1 (en) Face unlocking method and apparatus, and storage medium
US9721156B2 (en) Gift card recognition using a camera
CN112488064B (en) Face tracking method, system, terminal and storage medium
CN112651348B (en) Identity authentication method and device and storage medium
US9292739B1 (en) Automated recognition of text utilizing multiple images
CN101867755B (en) Information processing apparatus and information processing method
US20200279120A1 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
US20170124718A1 (en) Method, device, and computer-readable storage medium for area extraction
CN106228168A (en) The reflective detection method of card image and device
US11087137B2 (en) Methods and systems for identification and augmentation of video content
US11062136B2 (en) Pupil or iris tracking for liveness detection in authentication processes
US20230351809A1 (en) Iris authentication device, iris authentication method, and recording medium
AU2020309094B2 (en) Image processing method and apparatus, electronic device, and storage medium
US20150268728A1 (en) Systems and methods for notifying users of mismatches between intended and actual captured content during heads-up recording of video
US20180336320A1 (en) System and method for interacting with information posted in the media
CN110619656A (en) Face detection tracking method and device based on binocular camera and electronic equipment
CN111327888B (en) Camera control method and device, computer equipment and storage medium
EP4242978A1 (en) Methods and systems for authentication of a physical document
CN112597810A (en) Identity document authentication method and system
WO2020113020A1 (en) Providing content related to objects detected in images
US20230281820A1 (en) Methods and systems for authentication of a physical document
US20220318358A1 (en) Method and apparatus for continuous authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant