CN110648314B - Method, device and equipment for identifying flip image - Google Patents

Method, device and equipment for identifying flip image

Info

Publication number
CN110648314B
CN110648314B (application CN201910838097.6A)
Authority
CN
China
Prior art keywords
sensor data
image
motion sequence
terminal equipment
reproduction
Prior art date
Legal status
Active
Application number
CN201910838097.6A
Other languages
Chinese (zh)
Other versions
CN110648314A (en)
Inventor
郭明宇 (Guo Mingyu)
Current Assignee
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201910838097.6A priority Critical patent/CN110648314B/en
Publication of CN110648314A publication Critical patent/CN110648314A/en
Application granted granted Critical
Publication of CN110648314B publication Critical patent/CN110648314B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681: Motion detection
    • H04N 23/6812: Motion detection based on additional sensors, e.g. acceleration sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this specification disclose a method, an apparatus, and a device for identifying flipped (recaptured) images. The method comprises the following steps: first, a flip identification model is trained to learn the difference between the motion sequences of a terminal device in the predetermined period before it shoots the physical entity of a target object and the period before it shoots an image of that object; then, sensor data from the predetermined period before the terminal device shoots an object to be verified are collected, and motion sequence features of the terminal device are extracted from the sensor data, so that the flip identification model can predict whether the captured image of the object to be verified is a flipped image.

Description

Method, device and equipment for identifying flip image
Technical Field
The present document relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for identifying a flipped image.
Background
Image flipping, also called recapture, refers to copying a document or image by photographing it. Conventional flip identification schemes are generally based on image features such as borders, reflections, or moiré patterns, but as photographing technology continues to develop, the limitations of these schemes have become increasingly obvious.
Thus, a more reliable solution is needed.
Disclosure of Invention
The embodiments of this specification provide a method for identifying flipped images, in order to address the low accuracy of existing flip identification.
The embodiment of the specification also provides a method for identifying a flipped image, which comprises the following steps:
acquiring sensor data of a preset time period before the moment when the terminal equipment shoots an object to be verified, wherein the sensor data are used for representing a motion sequence of the terminal equipment;
processing the sensor data to obtain a motion sequence characteristic;
and inputting the motion sequence features into a flip identification model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip identification model is trained on the motion sequence features recorded when target objects were photographed and on the flip annotation labels of those objects.
The embodiment of the specification also provides a device for identifying a flipped image, which comprises:
the acquisition module is used for acquiring sensor data of a preset time period before the moment when the terminal equipment shoots the object to be verified, wherein the sensor data are used for representing a motion sequence of the terminal equipment;
the processing module is used for processing the sensor data to obtain motion sequence characteristics;
the identification module is used for inputting the motion sequence features into a flip identification model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip identification model is trained on the motion sequence features recorded when target objects were photographed and on the flip annotation labels of those objects.
The embodiment of the present specification also provides an electronic device, which is characterized by including:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the steps of the method of identifying a flip image as described above.
The embodiments of the present specification also provide a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program when executed by a processor implements the steps of the method for identifying a flip image as described above.
At least one of the technical solutions adopted in the embodiments of this specification can achieve the following beneficial effect:
the sensor of the terminal equipment senses the motion sequence of the terminal equipment in a preset time period before the moment of shooting the physical entity of the target object and the image of the target object, and trains the flip identification model by utilizing the motion sequence characteristics of the motion sequence, so that the flip identification model learns the motion sequence characteristic difference corresponding to the flip and normal shooting of the terminal equipment for flip identification, thereby effectively improving the accuracy of identifying the flip image.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate the exemplary embodiments of this specification and, together with their description, explain them; they are not intended to limit the specification unduly. In the drawings:
fig. 1 is a schematic view of a scene of capturing an image of a target object provided in the present specification;
fig. 2 is a schematic view of a scene of a physical entity of a shooting target object according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for identifying a flipped image according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for identifying a flipped image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The application scenario of the present specification is exemplified below.
Referring to fig. 1, a first application scenario includes: a terminal device 101, an image of a target object (denoted as object image 102) and a carrier 103, wherein:
the carrier 103 refers to a device that displays the object image 102;
a user can place the object image 102 in the viewfinder and perform the shooting operation by operating the terminal device 101, obtaining a captured image of the object image 102; during this process the terminal device 101 collects the motion sequence perceived by its sensor and uses it as a basis for identifying whether the captured image is a flipped image.
Referring to fig. 2, a second application scenario includes: a terminal device 101 and a physical entity 102' of a target object, wherein:
the user can place the physical entity 102' of the target object in the viewfinder and perform the shooting operation by operating the terminal device 101, obtaining a captured image of the physical entity 102'; during this process the terminal device 101 collects the motion sequence perceived by its sensor and uses it as a basis for identifying whether the captured image is a flipped image.
The terminal device 101 is the image capture device operated by the user, and includes mobile terminals with an image capture function, for example mobile phones and tablet computers; the carrier 103 may be a PC-side device, for example a desktop computer or an all-in-one machine, or may itself be a mobile terminal; the target object generally refers to an object that a service party requires to be verified while a user transacts business, for example a credential or a face.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 3 is a flowchart of a method for identifying a flipped image provided in the present specification, referring to fig. 3, specifically may include the following steps:
step 302, acquiring sensor data of a preset time period before the moment when the terminal equipment shoots an object to be verified, wherein the sensor data are used for representing a motion sequence of the terminal equipment; the running sequence refers to time sequence motion signal data of a preset time period before the moment of shooting the object to be verified.
It should be noted that, for a scene that requires the user to perform the shooting operation, one implementation of step 302 may be:
and acquiring data perceived by a sensor of the terminal equipment in a first time period from a time point of starting a shooting interface to a time point of triggering shooting action, and taking the data as the sensor data. Specific examples may be:
Example 1: first, the user inputs a first instruction for opening the shooting interface by operating the terminal device; on detecting the first instruction, the terminal device opens the shooting interface in response, begins reading the motion data perceived by its sensor, and continuously accumulates a record of it. Next, the user places the object to be verified in the viewfinder of the shooting interface and inputs a second instruction for triggering the shooting action; in response, the terminal device shoots to obtain a captured image of the object to be verified and stops reading the sensor data. Finally, the terminal device collates the data accumulated over this period to obtain the sensor data.
Example 2: the terminal device records a first time point at which the first instruction for opening the shooting interface is acquired and a second time point at which the second instruction for triggering the shooting action is acquired, and, when shooting completes, extracts the sensor data between the first and second time points from the data recorded by the sensor.
The first time period may be any period between the time point of opening the shooting interface and the time point of triggering the shooting action whose length exceeds a minimum shooting-duration threshold; the threshold may be the pre-computed average, across terminal devices, of the time between the first time point (acquiring the first instruction) and the second time point (acquiring the second instruction).
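The windowing described in examples 1 and 2 above can be sketched as follows. This is an illustrative Python simulation under assumptions, not device code: the names SensorBuffer, record, and window are invented for the example, and readings stand in for whatever motion values the device's sensor reports.

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class SensorBuffer:
    """Accumulates timestamped sensor readings while the camera interface is open."""
    timestamps: list = field(default_factory=list)
    readings: list = field(default_factory=list)

    def record(self, t, value):
        # Called for every sensor event while the interface is open.
        self.timestamps.append(t)
        self.readings.append(value)

    def window(self, t_start, t_end):
        # Return the readings inside [t_start, t_end] -- the "first time
        # period" between opening the interface and the shutter press.
        lo = bisect.bisect_left(self.timestamps, t_start)
        hi = bisect.bisect_right(self.timestamps, t_end)
        return self.readings[lo:hi]

buf = SensorBuffer()
for t in range(10):            # simulated sensor events at t = 0..9
    buf.record(t, (0.0, 0.0, 9.8))
segment = buf.window(2, 7)     # keep only the events between t=2 and t=7
```

As the text notes, extracting a sub-window rather than the full record reduces the data the device must process.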
It should be noted that, for a scene that does not require the user to perform the shooting operation, another implementation manner of step 302 may be:
and acquiring data perceived by a sensor of the terminal equipment in a second time period between the time point of opening the scanning interface and the time point of finishing scanning, and taking the data as the sensor data. Specific examples may be:
example 3, firstly, a user inputs a first instruction for starting a shooting interface through the operation terminal device, and when the terminal device detects the first instruction, the shooting interface is started in response to the first instruction and starts to read motion data perceived by a sensor of the terminal device and perform continuous accumulation record; then, the user puts the object to be verified in a scanning interface through operating the terminal equipment, so that the terminal equipment can automatically scan, and when the terminal equipment detects that the scanning is completed, the terminal equipment stops reading the data perceived by the sensor; and finally, the terminal equipment collates the data accumulated and recorded in the time period to obtain the sensor data.
Example 4: the terminal device records a first time point at which the first instruction for opening the shooting interface is acquired and a second time point at which scanning completes, and, when scanning finishes, reads the data perceived by the motion sensor in the second time period between the two points to obtain the sensor data.
The second time period may be any period between the time point of opening the scanning interface and the time point of completing scanning whose length is not smaller than a minimum scanning-duration threshold; the threshold may be the pre-computed average, across terminal devices, of the time between the first time point (acquiring the instruction to open the shooting interface) and the second time point (completing scanning).
For examples 1 to 4, the first instruction for opening the shooting interface may be the instruction corresponding to a user operation such as tapping the camera or tapping the 'scan' control; the second instruction for triggering the shooting action may be the instruction corresponding to the user tapping the shoot button on the shooting interface.
In this way, the embodiments of this specification use a sensor built into the terminal device to perceive its motion while the object to be verified is being shot, so an accurate motion sequence of the device can be obtained, providing data support for the subsequent flip identification. Further, sensor data from only part of the period between the first and second time points can be selectively acquired instead of the full data, reducing the amount of data the terminal device must process while preserving the accuracy of flip identification.
In addition, to prevent the sensor data of different shooting sessions from interfering with one another and to improve the accuracy of the sensor data acquired in each session, the method further comprises an initialization step upon detection of the first instruction, which may specifically include: initializing the sensor when the instruction for opening the shooting interface is detected; or initializing the camera of the terminal device when that instruction is detected.
Further, the difference between the motion sequence features of the terminal device when it shoots the physical entity of the object to be verified (recorded as normal shooting) and when it shoots an image of the object (recorded as flipping) may appear in several motion dimensions, including the direction dimension, the speed dimension, and so on. The sensor data in step 302 may therefore combine motion data from multiple dimensions, including but not limited to one or both of direction sensor data, which characterize the sequence of motion directions of the terminal device, and speed sensor data, which characterize the sequence of motion speeds of the terminal device.
Accordingly, the sensor in step 302 is preferably a motion sensor built into the terminal device; a motion sensor perceives the motion of an object and feeds the information back to the computer for corresponding processing. The type of motion sensor corresponds to the dimension of motion data required, for example an angular velocity sensor for the direction dimension and an acceleration sensor for the acceleration dimension.
On this basis, the embodiments of this specification use the motion sensors built into the terminal device to perceive several types of motion sequence, which yields several types of high-precision motion sequence features for flip identification and provides data support for improving its accuracy. Moreover, since these built-in motion sensors already exist on the terminal device, no additional sensors need to be fitted, giving the approach strong generality.
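Combining two motion dimensions as described above requires aligning the sensor streams in time. The following sketch resamples a direction (gyroscope) stream and a speed (accelerometer) stream onto a common time grid with a zero-order hold; the sampling scheme and all names are assumptions for illustration, not specified by the patent.

```python
def resample(stream, grid):
    """stream: list of (t, value) pairs sorted by t; grid: target timestamps.
    Zero-order hold: for each grid time, take the latest reading at or
    before it (or the first reading if none precedes it)."""
    out, i = [], 0
    for t in grid:
        while i + 1 < len(stream) and stream[i + 1][0] <= t:
            i += 1
        out.append(stream[i][1])
    return out

def combine(direction_stream, speed_stream, grid):
    """Return per-tick (direction, speed) pairs on the shared grid."""
    d = resample(direction_stream, grid)
    s = resample(speed_stream, grid)
    return list(zip(d, s))

# Sparse streams with different native rates (timestamps in ms):
gyro = [(0, 0.0), (10, 0.5), (20, 1.0)]            # angular velocity, rad/s
accel = [(0, 9.8), (5, 9.6), (15, 9.1), (20, 8.9)]  # acceleration, m/s^2
combined = combine(gyro, accel, grid=[0, 10, 20])
```

A zero-order hold is the simplest alignment choice; linear interpolation would also serve if smoother sequences were needed.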
Step 304, processing the sensor data to obtain a motion sequence characteristic;
the motion sequence features refer to feature variables of preset feature dimensions corresponding to the motion sequence of the terminal equipment; the preset feature dimension is generated based on the motion sequence difference corresponding to the terminal device in the process of performing the flipping and the normal shooting, referring to fig. 1 and fig. 2, and may be specifically exemplified as follows:
Example 1: during flipping, the angle between the screen of the terminal device and the horizontal is generally close to 90 degrees, whereas during normal shooting it is generally close to 0 degrees; an angle dimension can therefore be preset.
Example 2: before a flip shot, the terminal device is generally rotated up from a horizontal toward a vertical orientation, whereas before a normal shot it is generally rotated down from a vertical toward a horizontal orientation; a rotation-direction dimension can therefore be preset.
It should be noted that, one implementation of step 304 may be:
quantize the sensor data to obtain a motion sequence trajectory, then extract motion features of the preset feature dimensions from the trajectory, for example the angle of the terminal device at shutter time and its rotation direction before shooting, and use them as the motion sequence features.
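The quantize-then-extract step above can be sketched as follows. This is a minimal illustration under assumptions: the angle is estimated from a gravity (accelerometer) vector, and the feature names are invented for the example; the patent does not specify a formula.

```python
import math

def pitch_from_gravity(ax, ay, az):
    """Angle (degrees) between the device screen and the horizontal,
    estimated from a single gravity-vector reading."""
    return math.degrees(math.atan2(math.hypot(ax, ay), abs(az)))

def motion_sequence_features(accel_sequence):
    """accel_sequence: list of (ax, ay, az) readings ending at shutter time.
    Returns the two feature dimensions discussed in examples 1 and 2."""
    angles = [pitch_from_gravity(*a) for a in accel_sequence]
    return {
        # near 90 for a flip shot at a screen, near 0 for a normal shot:
        "angle_at_shot": angles[-1],
        # positive if the device was tilted up before the shot:
        "tilt_direction": angles[-1] - angles[0],
    }

# Device lifted from flat (gravity on z) to upright (gravity on y):
seq = [(0.0, 0.0, 9.8), (0.0, 4.9, 8.5), (0.0, 9.8, 0.001)]
feats = motion_sequence_features(seq)
```

The resulting dictionary would be the "motion sequence features" fed to the model in step 306.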
On this basis, the embodiments of this specification set several feature dimensions by comparing the motion of the terminal device during flipping with its motion during normal shooting, and extract the corresponding motion features, so that the flip identification model learns the difference between the two and its identification accuracy can be effectively improved.
Step 306, inputting the motion sequence features into a flip identification model to predict whether the captured image of the object to be verified is a flipped image;
the flip identification model is trained on the motion sequence features recorded when target objects were photographed and on the flip annotation labels of those objects, each label indicating whether the image obtained by photographing the target object is a flipped image.
It should be noted that, if the image of the object to be verified is predicted to be a flip image, a message that the verification of the object to be verified fails is fed back. If the image of the object to be verified is predicted to be a non-reproduction image, the object to be verified is further verified, and a message that verification fails or passes is fed back based on a verification result.
It will be appreciated that, prior to step 306, the method further comprises training the flip identification model, which specifically includes:
first, acquiring sample data and corresponding labels, where each sample comprises the motion sequence features recorded when a target object was photographed, the label indicates whether the image obtained by photographing that object is a flipped image, and the motion sequence features are the features of the motion sequence corresponding to the sensor data of the predetermined period before the moment of shooting, including at least the features of the preset feature dimensions; then, training the flip identification model on the sample data and labels. The model may be a classification model based on machine learning algorithms such as Bayes classifiers, decision trees, and the like.
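The training step can be illustrated with a deliberately simplified stand-in. The patent only names "a classification model based on machine learning algorithms"; the one-feature decision stump below, fitted on the angle-at-shot feature with flip labels (1 = flipped), is an assumed simplification, not the patent's model.

```python
def fit_stump(samples, labels):
    """samples: list of angle_at_shot values; labels: 1 = flipped image.
    Pick the threshold that best separates the two classes."""
    best = (None, -1.0)
    for threshold in sorted(set(samples)):
        preds = [1 if s >= threshold else 0 for s in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best[1]:
            best = (threshold, acc)
    return best[0]

def predict(threshold, angle):
    """1 = predicted flipped image, 0 = predicted normal shot."""
    return 1 if angle >= threshold else 0

# Toy training set: flip shots taken near vertical, normal shots near flat.
angles = [85.0, 88.0, 92.0, 5.0, 10.0, 2.0]
labels = [1, 1, 1, 0, 0, 0]
thr = fit_stump(angles, labels)
```

A production system would use a full classifier over all feature dimensions, but the train-on-labeled-features shape is the same.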
Further, to improve the accuracy of flip identification, the method further includes a secondary flip identification step, which specifically comprises:
acquiring the image data of the captured image of the object to be verified; analyzing information such as borders, reflections, or moiré patterns of the image based on those data to obtain a secondary prediction result; and fusing the secondary prediction result with the prediction result of the flip identification model (recorded as the primary prediction result) to obtain the final flip identification result. The fusion of the primary and secondary prediction results may proceed as follows:
if the primary prediction result and the secondary prediction result are consistent, for example both are 'flipped image' or both are 'non-flipped image', the final flip identification result is that flipped/non-flipped verdict;
if the primary prediction result is inconsistent with the secondary prediction result, the flip identification model predicts again, and the re-predicted result is taken as authoritative. At the same time, the number of inconsistent predictions is recorded and used as an evaluation index of the flip identification model, triggering its iterative optimization.
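The two-branch fusion rule above (agree: accept; disagree: re-predict and count the disagreement) can be sketched as follows; the function names are illustrative.

```python
disagreements = 0  # evaluation counter feeding the model's iterative optimization

def fuse(primary, secondary, repredict):
    """primary/secondary: True if judged a flipped image.
    repredict: callable re-running the model when the two disagree."""
    global disagreements
    if primary == secondary:
        return primary          # consistent: accept the shared verdict
    disagreements += 1          # record the inconsistency for evaluation
    return repredict()          # inconsistent: the model's re-prediction wins

same = fuse(True, True, lambda: False)    # consistent case
final = fuse(True, False, lambda: True)   # inconsistent case: re-predict
```

In a real system the counter would persist across requests so that a rising disagreement rate can trigger retraining.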
On this basis, the embodiments of this specification use a classification model to learn the difference between the motion features of the terminal device during flipping and during normal shooting, so as to intelligently identify the motion sequence features of the predetermined period before the object to be verified is shot and predict the flip identification result, giving high identification accuracy. In addition, the embodiments also introduce the conventional flip identification scheme based on image features such as borders, reflections, or moiré patterns, and use its prediction as a reference to optimize the flip identification model and its predictions, which can further improve the model's prediction performance.
In summary, in the embodiments of this specification, a sensor of the terminal device senses the device's motion sequence in the predetermined period before it shoots the physical entity of a target object or an image of that object, and the flip identification model is trained on the resulting motion sequence features. The model thereby learns the difference between the motion sequence features of flipping and of normal shooting and uses it for flip identification, effectively improving the accuracy of identifying flipped images.
In addition, for simplicity of explanation, the above-described method embodiments are depicted as a series of acts, but it should be understood by those skilled in the art that the present embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently in accordance with the embodiments herein. Further, those skilled in the art will recognize that the embodiments described in the specification are all preferred embodiments, and that the actions involved are not necessarily required for the embodiments of the present specification.
Fig. 4 is a schematic structural diagram of an apparatus for identifying a flipped image according to an embodiment of the present disclosure, referring to fig. 4, the apparatus may specifically include: an acquisition module 402, a processing module 404, and an identification module 406, wherein:
an acquisition module 402, configured to acquire sensor data of a predetermined period of time before a moment when an object to be verified is photographed by a terminal device, where the sensor data is used to characterize a motion sequence of the terminal device;
a processing module 404, configured to process the sensor data to obtain the motion sequence features;
an identification module 406, configured to input the motion sequence features into a flip identification model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip identification model is trained on the motion sequence features recorded when target objects were photographed and on the flip annotation labels of those objects.
Optionally, the acquiring module 402 includes:
the first acquisition unit acquires data perceived by a sensor of the terminal equipment in a first time period from a time point of starting a shooting interface to a time point of triggering shooting action as the sensor data.
Optionally, the first period of time is any period of time between the point of time when the shooting interface is turned on and the point of time when the shooting action is triggered.
Optionally, the acquiring module 402 includes:
and a second acquisition unit for acquiring data perceived by the sensor of the terminal equipment in a second time period between the time point of opening the scanning interface and the time point of finishing scanning, and taking the data as the sensor data.
Optionally, the second period of time is any period of time between the time point when the scanning interface is turned on and the time point when the scanning is completed.
Optionally, the apparatus further comprises:
and the initialization module initializes the sensor when detecting an instruction for starting a shooting interface.
Optionally, the sensor data includes: and direction sensor data, wherein the direction sensor data is used for representing a motion direction sequence of the terminal equipment.
Optionally, the sensor data further includes: speed sensor data for characterizing a sequence of movement speeds of the terminal device.
Optionally, the apparatus further comprises:
and the feedback module is used for feeding back a message of failure in verification of the object to be verified if the image of the object to be verified is predicted to be a flip image.
Thus, in the embodiments of this specification, a sensor of the terminal device senses the device's motion sequence in the predetermined period before it shoots the physical entity of a target object or an image of that object, and the flip identification model is trained on the resulting motion sequence features; the model thereby learns the difference between the motion sequence features of flipping and of normal shooting and uses it for flip identification, effectively improving the accuracy of identifying flipped images.
In addition, for the above-described apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference should be made to the description of the method embodiments for relevant points. Further, it should be noted that, among the respective components of the apparatus of the present specification, the components thereof are logically divided according to functions to be realized, but the present specification is not limited thereto, and the respective components may be re-divided or combined as necessary.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and referring to fig. 5, the electronic device includes a processor, an internal bus, a network interface, a memory, and a nonvolatile memory, and may include hardware required by other services. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form a device for identifying the flipped image on a logic level. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
The network interface, processor, and memory may be interconnected by a bus system. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Buses may be classified as address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in FIG. 5, but this does not mean there is only one bus or one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include read-only memory and random access memory, and provides instructions and data to the processor. The memory may include random access memory (RAM), and may further include non-volatile memory, such as at least one disk memory.
The processor is used to execute the program stored in the memory, and specifically to perform the following:

acquiring sensor data from a predetermined time period before the moment at which the terminal device photographs an object to be verified, wherein the sensor data characterize a motion sequence of the terminal device;

processing the sensor data to obtain motion sequence features; and

inputting the motion sequence features into a flip recognition model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip recognition model is trained based on motion sequence features sensed when a target object is photographed and flip annotation labels of the target object.
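These three steps can be sketched in code. The sketch below is illustrative only: it assumes the sensor window arrives as an N×2 sequence of (direction, speed) readings and that any trained classifier exposing a scikit-learn-style `predict` method stands in for the flip recognition model; the function names and the particular summary statistics are assumptions, not the patent's actual implementation.

```python
import numpy as np

def extract_motion_features(samples):
    """Reduce one capture's raw sensor window (an N x 2 sequence of
    direction and speed readings) to a fixed-length feature vector:
    per-channel mean, standard deviation, and total variation."""
    arr = np.asarray(samples, dtype=float)
    total_variation = np.abs(np.diff(arr, axis=0)).sum(axis=0)
    return np.concatenate([arr.mean(axis=0), arr.std(axis=0), total_variation])

def predict_is_flipped(model, samples):
    """Run the (stand-in) flip recognition model on one sensor window."""
    features = extract_motion_features(samples).reshape(1, -1)
    return bool(model.predict(features)[0])
```

Intuitively, aiming a camera at a screen or printed photo tends to produce a different aim-and-hold motion pattern than photographing a physical object, and summary statistics like these try to expose that difference to the classifier.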
The method disclosed in the embodiment shown in Fig. 4 of this specification, performed by the apparatus for identifying flipped images or by the manager (Master) node, may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical blocks disclosed in the embodiments of this specification. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of this specification may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The apparatus for identifying flipped images may also perform the method of Fig. 3 and implement the method performed by the manager node.
Based on the same inventive concept, the embodiments of this specification further provide a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the method for identifying flipped images provided by the embodiment corresponding to Fig. 3.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely an embodiment of this specification and is not intended to limit it. Various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this specification shall fall within the scope of its claims.

Claims (12)

1. A method for identifying a flipped image, comprising:
acquiring sensor data from a predetermined time period before the moment at which a terminal device photographs an object to be verified, wherein the sensor data characterize a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features; and
inputting the motion sequence features into a flip recognition model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip recognition model is trained based on motion sequence features sensed when a target object is photographed and flip annotation labels of the target object.
2. The method according to claim 1, wherein acquiring the sensor data from the predetermined time period before the moment at which the terminal device photographs the object to be verified comprises:
acquiring, as the sensor data, data sensed by a sensor of the terminal device during a first time period between the point in time at which a shooting interface is opened and the point in time at which a shooting action is triggered.
3. The method according to claim 2, wherein the first time period is any time period between the point in time at which the shooting interface is opened and the point in time at which the shooting action is triggered.
4. The method according to claim 1, wherein acquiring the sensor data from the predetermined time period before the moment at which the terminal device photographs the object to be verified comprises:
acquiring, as the sensor data, data sensed by a sensor of the terminal device during a second time period between the point in time at which a scanning interface is opened and the point in time at which scanning is completed.
5. The method according to claim 4, wherein the second time period is any time period between the point in time at which the scanning interface is opened and the point in time at which the scanning is completed.
6. The method according to claim 2 or 4, further comprising:
initializing the sensor when an instruction to open the shooting interface is detected.
7. The method according to claim 1, wherein the sensor data comprise direction sensor data, and the direction sensor data characterize a motion direction sequence of the terminal device.
8. The method according to claim 7, wherein the sensor data further comprise speed sensor data, and the speed sensor data characterize a motion speed sequence of the terminal device.
9. The method according to claim 1, further comprising:
feeding back a message that verification of the object to be verified has failed, if the image of the object to be verified is predicted to be a flipped image.
10. An apparatus for identifying a flipped image, comprising:
an acquisition module, configured to acquire sensor data from a predetermined time period before the moment at which a terminal device photographs an object to be verified, wherein the sensor data characterize a motion sequence of the terminal device;
a processing module, configured to process the sensor data to obtain motion sequence features; and
a recognition module, configured to input the motion sequence features into a flip recognition model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip recognition model is trained based on motion sequence features sensed when a target object is photographed and flip annotation labels of the target object.
11. An electronic device, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform:
acquiring sensor data from a predetermined time period before the moment at which a terminal device photographs an object to be verified, wherein the sensor data characterize a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features; and
inputting the motion sequence features into a flip recognition model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip recognition model is trained based on motion sequence features sensed when a target object is photographed and flip annotation labels of the target object.
12. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following operations:
acquiring sensor data from a predetermined time period before the moment at which a terminal device photographs an object to be verified, wherein the sensor data characterize a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features; and
inputting the motion sequence features into a flip recognition model to predict whether the captured image of the object to be verified is a flipped image, wherein the flip recognition model is trained based on motion sequence features sensed when a target object is photographed and flip annotation labels of the target object.
CN201910838097.6A 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image Active CN110648314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838097.6A CN110648314B (en) 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910838097.6A CN110648314B (en) 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image

Publications (2)

Publication Number Publication Date
CN110648314A CN110648314A (en) 2020-01-03
CN110648314B true CN110648314B (en) 2023-08-04

Family

ID=69010067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838097.6A Active CN110648314B (en) 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image

Country Status (1)

Country Link
CN (1) CN110648314B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215232B (en) * 2020-10-10 2023-10-24 平安科技(深圳)有限公司 Certificate verification method, device, equipment and storage medium
CN113222952B (en) * 2021-05-20 2022-05-24 蚂蚁胜信(上海)信息技术有限公司 Method and device for identifying copied image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107710221A * 2015-06-12 2018-02-16 北京释码大华科技有限公司 Method, apparatus, and mobile terminal for detecting a living subject
CN108875688A * 2018-06-28 2018-11-23 北京旷视科技有限公司 Living-body detection method, apparatus, system, and storage medium
CN109635539A * 2018-10-30 2019-04-16 华为技术有限公司 Face recognition method and electronic device
CN110023946A * 2016-07-05 2019-07-16 吴业成 Spoofing attack detection during live image capture

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292269B * 2017-06-23 2020-02-28 中国科学院自动化研究所 Face image forgery detection method based on perspective distortion features, and storage and processing device
CN109325933B * 2017-07-28 2022-06-21 阿里巴巴集团控股有限公司 Method and device for recognizing recaptured images
CN109815960A * 2018-12-21 2019-05-28 深圳壹账通智能科技有限公司 Deep-learning-based method, apparatus, device, and medium for recognizing recaptured images
CN110046644B * 2019-02-26 2023-04-07 创新先进技术有限公司 Certificate anti-counterfeiting method and device, computing device, and storage medium

Also Published As

Publication number Publication date
CN110648314A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN109190539B (en) Face recognition method and device
CN109409277B (en) Gesture recognition method and device, intelligent terminal and computer storage medium
CN106650662B (en) Target object shielding detection method and device
CN110705532B (en) Method, device and equipment for identifying copied image
CN113095124A (en) Face living body detection method and device and electronic equipment
CN102096805B (en) Apparatus and method for registering plurality of facial images for face recognition
CN110688939B (en) Method, system and equipment for verifying certificate image to be identified
CN109727275B (en) Object detection method, device, system and computer readable storage medium
CN111368944B (en) Method and device for recognizing copied image and certificate photo and training model and electronic equipment
CN101477616B (en) Human face detecting and tracking process
CN101300830A (en) System and method for implementing stability of a plurality of pick-up images driven by movement
CN110648314B (en) Method, device and equipment for identifying flip image
CN112672145B (en) Camera shooting function detection method and device
CN112333356B (en) Certificate image acquisition method, device and equipment
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
US20170032172A1 (en) Electronic device and method for splicing images of electronic device
WO2020114105A1 (en) Comparison method based on multiple facial images, apparatus and electronic device
CN110263805B (en) Certificate verification and identity verification method, device and equipment
CN109102026B (en) Vehicle image detection method, device and system
CN113780212A (en) User identity verification method, device, equipment and storage medium
CN111432134A (en) Method and device for determining exposure time of image acquisition equipment and processor
CN112052702A (en) Method and device for identifying two-dimensional code
CN111523402B (en) Video processing method, mobile terminal and readable storage medium
CN110223320B (en) Object detection tracking method and detection tracking device
CN112418303A (en) Training method and device for recognizing state model and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, British Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

TA01 Transfer of patent application right
GR01 Patent grant