CN110648314A - Method, device and equipment for identifying copied image - Google Patents

Method, device and equipment for identifying copied image

Info

Publication number
CN110648314A
CN110648314A (application CN201910838097.6A; granted as CN110648314B)
Authority
CN
China
Prior art keywords
sensor data
reproduction
motion sequence
image
verified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910838097.6A
Other languages
Chinese (zh)
Other versions
CN110648314B (en)
Inventor
郭明宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910838097.6A priority Critical patent/CN110648314B/en
Publication of CN110648314A publication Critical patent/CN110648314A/en
Application granted granted Critical
Publication of CN110648314B publication Critical patent/CN110648314B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6812 Motion detection based on additional sensors, e.g. acceleration sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of this specification disclose a method, an apparatus, and a device for recognizing a copied image. The method comprises the following steps: first, a reproduction recognition model is trained to learn the difference between the motion sequences of a terminal device in a predetermined time period before the moment of shooting the physical entity of a target object and before the moment of shooting an image of the target object; then, sensor data from a predetermined time period before the moment the terminal device shoots an object to be verified is acquired, and the motion sequence features of the terminal device are extracted from the sensor data, so that the reproduction recognition model can predict whether the captured image of the object to be verified is a copied image.

Description

Method, device and equipment for identifying copied image
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for recognizing a copied image.
Background
Image reproduction, also called photo reproduction (recapture), refers to the process of copying a document or picture by photographing it. Current reproduction identification schemes are generally based on image features such as frame, reflection, or moiré information; however, as photographing technology continues to develop, the limitations of these traditional schemes become increasingly obvious.
Therefore, a more reliable solution is needed.
Disclosure of Invention
Embodiments of this specification provide a method for recognizing a copied image, so as to address the low accuracy of reproduction recognition.
An embodiment of the present specification further provides a method for recognizing a copied image, including:
acquiring sensor data of a preset time period before the moment when a terminal device shoots an object to be verified, wherein the sensor data is used for representing a motion sequence of the terminal device;
processing the sensor data to obtain a motion sequence feature;
inputting the motion sequence characteristics into a reproduction identification model to predict whether the shot image of the object to be verified is a reproduction image, wherein the reproduction identification model is obtained by training based on the motion sequence characteristics when the target object is shot and a reproduction labeling label of the target object.
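The three claimed steps can be sketched end to end as follows; every function and parameter name here is an illustrative assumption, not an API from the patent:

```python
def identify_recapture(read_sensor_window, extract_features, model, t_shoot, window):
    """Sketch of the claimed method (all names are hypothetical): gather
    sensor data for the predetermined window before the shot, derive
    motion-sequence features from it, and classify with the trained model."""
    sensor_data = read_sensor_window(t_shoot - window, t_shoot)  # step 1: acquire sensor data
    features = extract_features(sensor_data)                     # step 2: motion sequence features
    return model(features)                                       # step 3: True -> copied image
```

The three callables are injected so the sketch stays agnostic about the concrete sensor API and model implementation.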
An embodiment of the present specification further provides an apparatus for recognizing a copied image, including:
the apparatus comprises an acquisition module, a processing module, and a recognition module, wherein the acquisition module is configured to acquire sensor data of a predetermined time period before the moment when the terminal device shoots an object to be verified, the sensor data being used to represent a motion sequence of the terminal device;
the processing module is used for processing the sensor data to obtain motion sequence characteristics;
and the recognition module is configured to input the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a copied image, wherein the reproduction recognition model is obtained by training based on the motion sequence features when the target object is shot and the reproduction label of the target object.
An embodiment of the present specification further provides an electronic device, which includes:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the steps of the method of identifying a copied image as described above.
Embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method for recognizing a copied image described above.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
the motion sequence of the terminal equipment in a preset time period before the moment of shooting the physical entity of the target object and the image of the target object is sensed by the sensor of the terminal equipment, and the motion sequence characteristics are utilized to train the copying recognition model, so that the copying recognition model learns the motion sequence characteristic difference corresponding to copying and normal copying of the terminal equipment, the copying recognition is carried out, and the accuracy of recognizing the copied image is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of it, illustrate embodiments of the specification and, together with the description, serve to explain the specification without limiting it. In the drawings:
fig. 1 is a scene schematic diagram of capturing an image of a target object provided in the present specification;
fig. 2 is a scene schematic diagram of a physical entity for shooting a target object according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a method for recognizing a copied image according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for recognizing a copied image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present disclosure clearer, the technical solutions of the present disclosure are described clearly and completely below with reference to specific embodiments and the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments in this specification without creative effort fall within the protection scope of this document.
An application scenario of the present specification is exemplified below.
Referring to fig. 1, a first application scenario includes: a terminal device 101, an image of a target object (denoted as object image 102), and a carrier 103, wherein:
the carrier 103 refers to a device that displays the object image 102;
a user can place the object image 102 in a view finder by operating the terminal device 101 and perform shooting operation to obtain a shot image of the object image 102; the terminal device 101 collects the motion sequence sensed by the sensor in the process and uses the motion sequence as a basis for identifying whether the shot image is a copied image.
Referring to fig. 2, the second application scenario includes: a physical entity 102' of the terminal device 101 and the target object, wherein:
a user can place the physical entity 102 'of the target object in the view finder through operating the terminal device 101 and perform shooting operation to obtain a shot image of the physical entity 102' of the target object; the terminal device 101 collects the motion sequence sensed by the sensor in the process and uses the motion sequence as a basis for identifying whether the shot image is a copied image.
The terminal device 101 refers to an image capturing device used by the user for shooting, including a mobile terminal with an image capturing function, for example a mobile phone or a tablet computer; the carrier 103 may be a PC terminal, for example a desktop computer or an all-in-one machine, or may itself be a mobile terminal; the target object generally refers to an object that a business party requires to be verified during the user's business handling, for example: credentials, faces, etc.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 3 is a schematic flowchart of a method for recognizing a copied image provided in this specification, and referring to fig. 3, the method may specifically include the following steps:
step 302, obtaining sensor data of a predetermined time period before the moment when a terminal device shoots an object to be verified, wherein the sensor data is used for representing a motion sequence of the terminal device; the motion sequence refers to the time-series motion signal data of the predetermined time period before the moment when the terminal device shoots the object to be verified.
It should be noted that, for a scene that requires a user to perform a shooting operation, one implementation manner of step 302 may be:
and acquiring data sensed by a sensor of the terminal equipment in a first time period from the time point of starting the shooting interface to the time point of triggering the shooting action, and taking the data as the sensor data. Specific examples can be:
Example 1: first, the user inputs a first instruction for starting the shooting interface by operating the terminal device; on detecting the first instruction, the terminal device starts the shooting interface in response, begins reading the motion data sensed by its sensor, and continuously accumulates and records that data. Then, the user places the object to be verified in the view-finding frame of the shooting interface and inputs a second instruction for triggering the shooting action; in response to the second instruction, the terminal device shoots to obtain a captured image of the object to be verified and stops reading the sensor data. Finally, the terminal device collates the data accumulated over this time period to obtain the sensor data.
Example 2, the terminal device records a first time point at which a first instruction to start a shooting interface is acquired and a second time point at which a second instruction to trigger a shooting action is acquired, and extracts sensor data between the first time point and the second time point from sensor data recorded by a sensor when shooting is completed.
The first time period may be any time period between the time point of starting the shooting interface and the time point of triggering the shooting action, provided its length is greater than a minimum shooting-duration threshold; this threshold may be a pre-computed average of the time between the first time point (when the terminal device obtains the first instruction) and the second time point (when it obtains the second instruction).
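Example 2 above, extracting the window between the first and second time points from a full sensor log subject to the minimum-duration check, can be sketched as follows; the `(timestamp, reading)` data layout and all names are assumptions for illustration:

```python
from bisect import bisect_left, bisect_right

def extract_sensor_window(samples, t_start, t_end, min_duration):
    """Return the (timestamp, reading) samples recorded between the moment
    the shooting interface was opened (t_start) and the moment the shutter
    was triggered (t_end), provided the window meets the minimum-duration
    threshold; samples must be sorted by timestamp."""
    if t_end - t_start < min_duration:
        return []  # shorter than any plausible shot: discard the window
    timestamps = [t for t, _ in samples]
    lo = bisect_left(timestamps, t_start)   # first sample at/after t_start
    hi = bisect_right(timestamps, t_end)    # first sample after t_end
    return samples[lo:hi]
```

Binary search keeps the extraction cheap even when the sensor log covers a much longer period than the window of interest.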
It should be noted that, for a scenario in which the user is not required to perform the shooting operation, another implementation manner of step 302 may be:
and acquiring data sensed by a sensor of the terminal equipment in a second time period between the time point of starting the scanning interface and the time point of finishing scanning, and taking the data as the sensor data. Specific examples can be:
Example 3: first, the user inputs a first instruction for starting the shooting interface by operating the terminal device; on detecting the first instruction, the terminal device starts the shooting interface in response, begins reading the motion data sensed by its sensor, and continuously accumulates and records that data. Then, the user places the object to be verified in the scanning interface for the terminal device to scan automatically; when the terminal device detects that scanning is complete, it stops reading the sensor data. Finally, the terminal device collates the data accumulated over this time period to obtain the sensor data.
Example 4, the terminal device records a first time point at which a first instruction for starting a shooting interface is acquired and a second time point at which the terminal device completes scanning, and reads data sensed by the motion sensor in a second time period between the first time point and the second time point when the scanning is completed, so as to obtain the sensor data.
The second time period may be any time period between the time point of starting the scanning interface and the time point of completing the scan, provided its length is not less than a minimum scanning-duration threshold; this threshold may be a pre-computed average of the time between the first time point (when the terminal device obtains the first instruction to start the shooting interface) and the second time point (when scanning completes).
For examples 1 to 4, the first instruction for opening the shooting interface may correspond to an operation such as the user tapping the camera or tapping a 'scan' control; the second instruction for triggering the shooting action may correspond to the user tapping the shoot button on the shooting interface.
Based on this, in the embodiments of this specification, the sensor built into the terminal device senses the motion data of the terminal device during the shooting of the object to be verified, so that an accurate motion sequence of the terminal device can be obtained, providing data support for subsequent reproduction identification. Moreover, sensor data can be collected selectively for only part of the period between the first and second time points rather than for the full period, which reduces the amount of data the terminal device needs to process while preserving reproduction recognition accuracy.
In addition, to avoid mutual interference between the sensor data of different shooting processes and to improve the precision of the sensor data collected in each shooting process, the method further comprises an initialization step upon detecting the first instruction, which may specifically include:
when an instruction for starting a shooting interface is detected, initializing the sensor; or when detecting an instruction of starting the shooting interface, initializing a camera of the terminal equipment.
Further, the difference between the motion sequence features when shooting the physical entity of the object to be verified (denoted normal shooting) and when shooting an image of the object to be verified (denoted reproduction) may be embodied in multiple motion dimensions, including the direction dimension, the speed dimension, and so on. Therefore, the sensor data in step 302 may be a combination of motion data from multiple motion dimensions, including but not limited to one or more of direction sensor data and speed sensor data, wherein the direction sensor data is used to characterize a sequence of motion directions of the terminal device, and the speed sensor data is used to characterize a sequence of motion speeds of the terminal device.
Correspondingly, the sensor in step 302 is preferably a motion sensor built into the terminal device, i.e., a sensor that senses the motion state of an object and feeds the information back for corresponding processing. The kind of motion sensor corresponds to the required motion dimension, for example: an angular velocity sensor for the direction dimension, an acceleration sensor for the speed dimension, and so on.
Based on this, the embodiments of this specification sense multiple types of motion sequences of the terminal device using its built-in motion sensors, which provides multiple types of high-precision motion sequence features for reproduction identification and supports improved identification precision. Moreover, because these motion sensors already exist in the terminal device, no additional sensors need to be configured, giving the scheme strong universality.
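The combination of direction and speed sensor data described above can be represented as one time-aligned record per sample; the record layout and names below are illustrative assumptions, not a format specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    t: float      # timestamp in seconds
    gyro: tuple   # angular velocity (x, y, z): the direction dimension
    accel: tuple  # acceleration (x, y, z): the speed dimension

def to_sequences(samples):
    """Split a list of mixed samples into per-dimension motion sequences,
    one for the direction sensor data and one for the speed sensor data."""
    direction_seq = [(s.t, s.gyro) for s in samples]
    speed_seq = [(s.t, s.accel) for s in samples]
    return direction_seq, speed_seq
```

Keeping a shared timestamp per record makes it straightforward to window both dimensions with the same first/second time points.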
Step 304, processing the sensor data to obtain a motion sequence characteristic;
the motion sequence feature refers to a feature variable of a preset feature dimension corresponding to the motion sequence of the terminal device; the preset feature dimension is generated based on a difference between motion sequences corresponding to the terminal device during the copying and the normal copying, which is shown in fig. 1 and fig. 2, and specific examples thereof may be as follows:
example 1, an included angle between the screen of the terminal device and a horizontal line is generally close to 90 degrees during the reproduction, and an included angle between the screen of the terminal device and the horizontal line is generally close to 0 degree during the normal reproduction, so that an angle dimension can be preset.
Example 2: during reproduction, the terminal device is generally flipped upward from a horizontal angle to a vertical angle, whereas during normal shooting it is generally flipped downward from a horizontal angle to a vertical angle; a flip-direction dimension can therefore be preset.
It should be noted that, one implementation manner of step 304 may be:
Quantize the sensor data to obtain a motion sequence trajectory, and extract the motion features of the preset feature dimensions from the trajectory, for example: the angle of the terminal device at the moment of shooting, the flip direction of the terminal device before shooting, and so on, as the motion sequence features.
Based on this, the embodiments of this specification preset multiple feature dimensions by comparing the motion differences of the terminal device during reproduction and during normal shooting, and extract the corresponding motion features, so that the reproduction recognition model learns the difference in motion features between reproduction and normal shooting, which can effectively improve the recognition precision of the reproduction recognition model.
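The angle-at-shooting and flip-direction features from examples 1 and 2 can be sketched as below; estimating tilt from a gravity-dominated accelerometer reading is one plausible approach, and all names and the exact feature definitions are assumptions:

```python
import math

def tilt_angle_deg(accel):
    """Angle between the device screen and the horizontal plane, estimated
    from a gravity-dominated accelerometer reading (ax, ay, az): ~0 degrees
    when lying flat (gravity along z), ~90 degrees when held upright."""
    ax, ay, az = accel
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(min(1.0, abs(az) / g)))

def extract_features(trajectory):
    """Hypothetical feature vector: final tilt angle, plus a net flip
    direction (1 if the device tilted upward toward vertical over the
    window, 0 otherwise), computed from accelerometer readings."""
    a0 = tilt_angle_deg(trajectory[0])
    a1 = tilt_angle_deg(trajectory[-1])
    flip_up = 1 if a1 > a0 else 0  # flipped up toward vertical, as in reproduction
    return [a1, flip_up]
```

In this sketch, a trajectory ending near 90 degrees with an upward flip matches the reproduction pattern from examples 1 and 2, while one ending near 0 degrees matches normal shooting.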
Step 306, inputting the motion sequence characteristics into a copying recognition model to predict whether the shot image of the object to be verified is a copied image;
the copying recognition model is obtained by training based on motion sequence characteristics when the target object is shot and a copying label of the target object, and the copying label is used for indicating whether an image obtained by shooting the target object is a copied image or not.
It should be noted that, if the image of the object to be verified is predicted to be a copied image, a message indicating that the verification of the object to be verified fails is fed back. And if the image of the object to be verified is predicted to be a non-reproduction image, further verifying the object to be verified, and feeding back a message of verification failure or verification passing based on a verification result.
It will be appreciated that prior to step 306, the method further comprises: training a reproduction recognition model, wherein the step of training the reproduction recognition model specifically can be as follows:
First, obtain sample data and the corresponding labels, where the sample data comprises the motion sequence features when a target object was shot, the label indicates whether the image obtained by shooting the target object is a copied image, and the motion sequence features are the features of the motion sequence corresponding to the sensor data in a predetermined time period before the moment of shooting, including at least the features of the preset feature dimensions. Then, train the reproduction recognition model on the sample data and labels; the reproduction recognition model may be a classification model based on machine learning algorithms such as Bayesian classifiers, decision trees, and the like.
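As a toy illustration of such a classifier, the sketch below trains a one-level decision tree (a "stump") on a single motion feature such as the shooting-time tilt angle; the feature values and this training procedure are illustrative only, not the patent's concrete model:

```python
def train_stump(features, labels):
    """Find the threshold on one feature that best separates samples
    labeled 1 (copied image) from samples labeled 0 (normal shot)."""
    best_thr, best_acc = None, -1.0
    for thr in sorted(set(features)):
        correct = sum(1 for f, y in zip(features, labels)
                      if (1 if f >= thr else 0) == y)
        acc = correct / len(labels)
        if acc > best_acc:          # keep the lowest threshold among ties
            best_thr, best_acc = thr, acc
    return best_thr

def predict(thr, feature):
    """Classify one sample: 1 (copied) if the feature reaches the threshold."""
    return 1 if feature >= thr else 0
```

A real implementation would use multi-dimensional features and a full classifier (e.g. a decision tree or Bayesian model), but the train-on-labeled-motion-features-then-predict flow is the same.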
Further, to improve reproduction identification precision, the method may also include a secondary reproduction identification, which may specifically be:
Acquire the image data of the captured image of the object to be verified; analyze information such as the frame, reflection, or moiré of the image of the object to be verified based on the image data to obtain a secondary prediction result; and fuse the secondary prediction result with the prediction result of the reproduction recognition model (denoted the primary prediction result) to obtain the final reproduction identification result. The fusion of the primary and secondary prediction results may proceed as follows:
If the primary and secondary prediction results agree, for example both indicate a copied image or both indicate a non-copied image, the final reproduction identification result is that agreed result;
If the primary and secondary prediction results disagree, the reproduction recognition model is used to predict again, and the re-prediction result prevails. In parallel, the number of inconsistent predictions is recorded and used as an evaluation index of the reproduction recognition model to trigger its iterative optimization.
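The two fusion rules above can be sketched as a small function; the return shape and the `repredict` callback are illustrative assumptions:

```python
def fuse(primary, secondary, repredict):
    """Fuse the motion-model prediction (primary) with the image-feature
    prediction (secondary). Returns (final_result, disagreed): when the two
    agree, that result is final; when they disagree, the motion model is
    re-run via repredict() and its answer prevails, and the disagreement is
    flagged so the caller can count it for iterative model optimization."""
    if primary == secondary:
        return primary, False
    return repredict(), True
```

The disagreement flag is what lets the caller accumulate the inconsistency count used as the model's evaluation index.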
Based on this, the embodiments of this specification use a classification model to learn the difference between the motion features of the terminal device during reproduction and during normal shooting, so that the motion sequence features from the predetermined time period before the terminal device shoots the object to be verified can be recognized intelligently and the reproduction identification result predicted, with high identification accuracy. In addition, the embodiments also introduce the traditional reproduction identification scheme based on image features such as frame, reflection, or moiré, and use its prediction result as a reference to optimize the reproduction recognition model and its predictions, further improving the prediction performance of the reproduction recognition model.
In summary, in the embodiments of this specification, a sensor of the terminal device senses the motion sequence of the terminal device in a predetermined time period before the moment of shooting either the physical entity of a target object or an image of the target object, and the motion sequence features are used to train a reproduction recognition model. The model thus learns the difference between the motion sequence features corresponding to reproduction and to normal shooting and applies it to reproduction recognition, effectively improving the accuracy of recognizing copied images.
In addition, for the sake of simplicity, the above method embodiments are described as a series of acts or combinations, but it should be understood by those skilled in the art that the method embodiments are not limited by the described acts or sequences, as some steps may be performed in other sequences or simultaneously according to the method embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the described embodiments.
Fig. 4 is a schematic structural diagram of an apparatus for recognizing a copied image according to an embodiment of the present disclosure, and referring to fig. 4, the apparatus may specifically include: an acquisition module 402, a processing module 404, and an identification module 406, wherein:
an obtaining module 402, configured to obtain sensor data of a predetermined time period before a time when a terminal device shoots an object to be verified, where the sensor data is used to represent a motion sequence of the terminal device;
a processing module 404, which processes the sensor data to obtain a motion sequence feature;
and the recognition module 406 is configured to input the motion sequence characteristics into a reproduction recognition model to predict whether the shot image of the object to be verified is a reproduced image, where the reproduction recognition model is obtained based on motion sequence characteristics when the target object is shot and a reproduction label of the target object through training.
Optionally, the obtaining module 402 includes:
the first acquisition unit acquires data sensed by a sensor of the terminal device in a first time period from a time point of starting a shooting interface to a time point of triggering a shooting action and takes the data as the sensor data.
Optionally, the first time period is any time period between the time point of starting the shooting interface and the time point of triggering the shooting action.
Optionally, the obtaining module 402 includes:
and the second acquisition unit is used for acquiring data sensed by the sensor of the terminal equipment in a second time period between the time point of starting the scanning interface and the time point of finishing scanning and taking the data as the sensor data.
Optionally, the second time period is any time period between the time point of starting the scanning interface and the time point of completing the scanning.
Optionally, the apparatus further comprises:
and the initialization module initializes the sensor when detecting the instruction of starting the shooting interface.
Optionally, the sensor data includes: direction sensor data characterizing a sequence of directions of motion of the terminal device.
Optionally, the sensor data further includes: speed sensor data characterizing a sequence of motion speeds of the terminal device.
Optionally, the apparatus further comprises:
and the feedback module is used for feeding back a message that the verification of the object to be verified fails if the image of the object to be verified is predicted to be a copied image.
As can be seen, in the embodiments of this specification, a sensor of the terminal device senses the motion sequence of the terminal device in a predetermined time period before the moment of shooting either the physical entity of a target object or an image of the target object, and the motion sequence features are used to train a reproduction recognition model. The model thus learns the difference between the motion sequence features corresponding to reproduction and to normal shooting and applies it to reproduction recognition, effectively improving the accuracy of recognizing copied images.
In addition, as for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to part of the description of the method embodiment. Further, it should be noted that, among the respective components of the apparatus of the present specification, the components thereof are logically divided according to the functions to be implemented, but the present specification is not limited thereto, and the respective components may be newly divided or combined as necessary.
Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure, and referring to fig. 5, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, and may also include hardware required by other services. The processor reads a corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to form a device for identifying the copied image on a logic level. Of course, besides the software implementation, the present specification does not exclude other implementations, such as logic devices or a combination of software and hardware, and the like, that is, the execution subject of the following processing flow is not limited to each logic unit, and may be hardware or logic devices.
The network interface, the processor, and the memory may be interconnected by a bus system. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one double-headed arrow is shown in fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory is used for storing a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include a random-access memory (RAM) and may also include a non-volatile memory, such as at least one disk storage, and provides instructions and data to the processor.
The processor is used for executing the program stored in the memory, and specifically executes the following:
acquiring sensor data for a predetermined time period before the moment when a terminal device photographs an object to be verified, wherein the sensor data characterizes a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features;
inputting the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a reproduced image, wherein the reproduction recognition model is trained on motion sequence features recorded when a target object was photographed and on reproduction annotation labels of the target object.
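The three steps executed by the processor can be sketched as the following pipeline. The sensor-callback names, the 2-second window, and the `model.predict` interface are hypothetical, standing in for whatever sensor API and trained reproduction recognition model an actual implementation uses:

```python
from collections import deque

class ReproductionDetector:
    """Sketch of the acquire -> featurize -> predict flow; the callback
    names, window length, and model interface are illustrative."""

    def __init__(self, model, window_seconds=2.0):
        self.model = model
        self.window = window_seconds
        self.buffer = deque()  # (timestamp, reading) pairs

    def on_sensor_reading(self, timestamp, reading):
        # Keep only readings inside the predetermined time window
        self.buffer.append((timestamp, reading))
        while self.buffer and timestamp - self.buffer[0][0] > self.window:
            self.buffer.popleft()

    def on_shutter(self, capture_time):
        # Step 1: sensor data from the window before the capture moment
        samples = [r for t, r in self.buffer if capture_time - t <= self.window]
        # Step 2: process the sensor data into a motion-sequence feature
        feature = self._featurize(samples)
        # Step 3: the model predicts whether the shot is a reproduction
        return self.model.predict(feature)

    @staticmethod
    def _featurize(samples):
        # Mean absolute change between consecutive scalar readings
        if len(samples) < 2:
            return (0.0,)
        diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
        return (sum(diffs) / len(diffs),)
```

A caller would feed `on_sensor_reading` from the platform's sensor listener while the shooting interface is open, then call `on_shutter` when the shooting action is triggered; the buffer trimming is what bounds the acquisition to the predetermined time period.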
The method disclosed in the embodiment of fig. 4 of this specification, executed by the apparatus for recognizing a copied image or by a master node (Master), may be applied to a processor or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logical blocks disclosed in the embodiments of the present specification may be implemented or executed accordingly. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present specification may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The apparatus for recognizing a copied image may also perform the method of fig. 3 and implement the method performed by the manager node.
Based on the same inventive concept, the present specification also provides a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the method for recognizing a copied image provided by the embodiment corresponding to fig. 3.
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made from one embodiment to another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is brief, and for relevant details reference may be made to the corresponding parts of the description of the method embodiment.
The foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random-access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (12)

1. A method of identifying a copied image, comprising:
acquiring sensor data for a predetermined time period before the moment when a terminal device photographs an object to be verified, wherein the sensor data characterizes a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features;
inputting the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a reproduced image, wherein the reproduction recognition model is trained on motion sequence features recorded when a target object was photographed and on reproduction annotation labels of the target object.
2. The method according to claim 1, wherein the acquiring of the sensor data of the predetermined time period before the moment when the terminal device shoots the object to be verified comprises:
acquiring, as the sensor data, data sensed by a sensor of the terminal device in a first time period from the time point at which the shooting interface is opened to the time point at which the shooting action is triggered.
3. The method according to claim 2, wherein the first time period is any time period between the time point of starting the shooting interface and the time point of triggering the shooting action.
4. The method according to claim 1, wherein the acquiring of the sensor data of the predetermined time period before the moment when the terminal device shoots the object to be verified comprises:
acquiring, as the sensor data, data sensed by a sensor of the terminal device in a second time period between the time point at which the scanning interface is opened and the time point at which scanning is completed.
5. The method of claim 4, wherein the second time period is any time period between the point in time when the scan interface is turned on and the point in time when the scan is completed.
6. The method of claim 2 or 4, further comprising:
initializing the sensor when an instruction to open a shooting interface is detected.
7. The method of claim 1, the sensor data comprising: direction sensor data characterizing a sequence of directions of motion of the terminal device.
8. The method of claim 7, the sensor data further comprising: speed sensor data characterizing a sequence of motion speeds of the terminal device.
9. The method of claim 1, further comprising:
if the image of the object to be verified is predicted to be a copied image, feeding back a message indicating that verification of the object to be verified has failed.
10. An apparatus for recognizing a reproduced image, comprising:
an acquisition module, configured to acquire sensor data for a predetermined time period before the moment when a terminal device photographs an object to be verified, wherein the sensor data characterizes a motion sequence of the terminal device;
a processing module, configured to process the sensor data to obtain motion sequence features;
and a recognition module, configured to input the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a reproduced image, wherein the reproduction recognition model is trained on motion sequence features recorded when a target object was photographed and on reproduction annotation labels of the target object.
11. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring sensor data for a predetermined time period before the moment when a terminal device photographs an object to be verified, wherein the sensor data characterizes a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features;
inputting the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a reproduced image, wherein the reproduction recognition model is trained on motion sequence features recorded when a target object was photographed and on reproduction annotation labels of the target object.
12. A computer-readable storage medium having a computer program stored thereon, which when executed by a processor, performs the operations of:
acquiring sensor data for a predetermined time period before the moment when a terminal device photographs an object to be verified, wherein the sensor data characterizes a motion sequence of the terminal device;
processing the sensor data to obtain motion sequence features;
inputting the motion sequence features into a reproduction recognition model to predict whether the captured image of the object to be verified is a reproduced image, wherein the reproduction recognition model is trained on motion sequence features recorded when a target object was photographed and on reproduction annotation labels of the target object.
CN201910838097.6A 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image Active CN110648314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838097.6A CN110648314B (en) 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image


Publications (2)

Publication Number Publication Date
CN110648314A true CN110648314A (en) 2020-01-03
CN110648314B CN110648314B (en) 2023-08-04

Family

ID=69010067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838097.6A Active CN110648314B (en) 2019-09-05 2019-09-05 Method, device and equipment for identifying flip image

Country Status (1)

Country Link
CN (1) CN110648314B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292269A (en) * 2017-06-23 2017-10-24 中国科学院自动化研究所 Facial image false distinguishing method, storage, processing equipment based on perspective distortion characteristic
CN107710221A (en) * 2015-06-12 2018-02-16 北京释码大华科技有限公司 A kind of method, apparatus and mobile terminal for being used to detect live subject
CN108875688A (en) * 2018-06-28 2018-11-23 北京旷视科技有限公司 A kind of biopsy method, device, system and storage medium
CN109325933A (en) * 2017-07-28 2019-02-12 阿里巴巴集团控股有限公司 A kind of reproduction image-recognizing method and device
CN109635539A (en) * 2018-10-30 2019-04-16 华为技术有限公司 A kind of face identification method and electronic equipment
CN109815960A (en) * 2018-12-21 2019-05-28 深圳壹账通智能科技有限公司 Reproduction image-recognizing method, device, equipment and medium based on deep learning
CN110023946A (en) * 2016-07-05 2019-07-16 吴业成 Spoofing attack detection during picture catching at the scene
CN110046644A (en) * 2019-02-26 2019-07-23 阿里巴巴集团控股有限公司 A kind of method and device of certificate false proof calculates equipment and storage medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215232A (en) * 2020-10-10 2021-01-12 平安科技(深圳)有限公司 Certificate verification method, device, equipment and storage medium
CN112215232B (en) * 2020-10-10 2023-10-24 平安科技(深圳)有限公司 Certificate verification method, device, equipment and storage medium
CN113222952A (en) * 2021-05-20 2021-08-06 支付宝(杭州)信息技术有限公司 Method and device for identifying copied image
CN113222952B (en) * 2021-05-20 2022-05-24 Ant Shengxin (Shanghai) Information Technology Co., Ltd. Method and device for identifying copied image

Also Published As

Publication number Publication date
CN110648314B (en) 2023-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth floor, P.O. Box 847, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant