WO2018161298A1 - Image tampering forensics method and device - Google Patents

Image tampering forensics method and device

Info

Publication number
WO2018161298A1
WO2018161298A1 (PCT/CN2017/076106)
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
target object
tested
parallelism
Prior art date
Application number
PCT/CN2017/076106
Other languages
English (en)
French (fr)
Inventor
谭铁牛
董晶
王伟
彭勃
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 filed Critical 中国科学院自动化研究所
Priority to PCT/CN2017/076106 priority Critical patent/WO2018161298A1/zh
Priority to US16/336,918 priority patent/US10600238B2/en
Publication of WO2018161298A1 publication Critical patent/WO2018161298A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00 General purpose image data processing
    • G06T2201/005 Image watermarking
    • G06T2201/0201 Image watermarking whereby only tamper or origin are detected and no embedding takes place
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering

Definitions

  • The invention relates to the field of computer vision and image recognition, and in particular to an image tampering forensics method and device.
  • Digital image blind forensics, a technology that identifies the authenticity and source of images without relying on any pre-extracted signature or pre-embedded information, is gradually becoming a new research hotspot in the field of multimedia security and has broad application prospects.
  • Digital image blind forensics comprises a variety of methods based on different forensic clues, such as copy-move traces, multiple JPEG compression, high-frequency image statistics, illumination inconsistency, and geometric inconsistency.
  • Among them, forensic methods based on inconsistency clues in the scene use computer vision techniques to estimate variables in the scene; they are suitable for tamper forensics on low-quality images and are comparatively robust to post-processing.
  • The present invention proposes an image tampering forensics method based on a new scene forensic clue, so as to improve the detection accuracy of forensic methods based on inconsistency clues in the scene.
  • Specifically, the present invention proposes an image tampering forensics method based on plane contact constraints, which is not only suitable for tamper detection on low-quality images but also improves the detection accuracy of methods based on inconsistency clues in the scene.
  • The present invention also provides an image tampering forensics device.
  • In a first aspect, the technical solution of an image tampering forensics method in the present invention is that the method includes: marking observation clues of the image to be tested, where the image to be tested includes a target object and a support plane having a plane contact relationship; constructing a three-dimensional deformation model (a 3D morphable model) of the object category to which the target object belongs; estimating the three-dimensional normal vector of the support plane from the observation clues; estimating the three-dimensional pose of the target object from the observation clues and the deformation model, and thereby obtaining the plane normal vector of the plane on the side of the target object that contacts the support plane; and calculating, from the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, then judging from the parallelism whether the image to be tested is a tampered image. The parallelism is the angle between the normal vectors of different planes.
  • Marking the observation clues of the image to be tested specifically includes: marking feature observation points of the target object, and marking the endpoints of straight line segments in two different directions parallel to the support plane. The feature observation points include contour points of the target object, and the straight line segments in each direction include several parallel straight line segments.
  • Marking the contour points of the target object specifically includes: marking the contour points of the target object with an interactive mouse-dragging method.
  • Marking the endpoints of each straight line segment specifically includes: marking the center points at both ends of each segment with an interactive mouse-clicking method; setting the measurement uncertainty of each center point according to the blur of the segment's edge points; and marking the dispersion region of each center point according to the measurement uncertainties.
  • Constructing the three-dimensional deformation model of the object category to which the target object belongs specifically includes: acquiring a plurality of 3D sample models of samples classified under that category, and establishing semantic correspondence among the vertices of each 3D sample model;
  • constructing the three-dimensional deformation model from all semantically corresponded 3D sample models by principal component analysis.
  • Acquiring the plurality of 3D sample models specifically includes: obtaining 3D sample models preset in drawing software, and/or acquiring a 3D sample model of each sample with a 3D model scanning device.
  • Establishing semantic correspondence among the vertices of each 3D sample model specifically includes: putting the 3D sample models into semantic correspondence with a non-rigid registration method.
  • Constructing the three-dimensional deformation model specifically includes:
  • constructing, from the three-dimensional coordinates of each semantically corresponded 3D sample model, a one-dimensional column vector for each model, whose elements are the three-dimensional coordinates of the vertices of that model;
  • splicing the one-dimensional column vectors of all 3D sample models column by column to obtain a 3D sample model matrix;
  • analyzing the 3D sample model matrix by principal component analysis to obtain the three-dimensional deformation model of the object category to which the target object belongs.
  • A preferred technical solution provided by the present invention is that estimating the three-dimensional normal vector of the support plane from the observation clues specifically includes: sampling the marked endpoints of the straight line segments in the two directions parallel to the support plane, and calculating the vanishing points of the image in those directions from the sampled endpoints; constructing the vanishing-line equation of the support plane from the three-dimensional homogeneous coordinates of the vanishing points; and calculating the three-dimensional normal vector of the support plane from the vanishing-line equation and the camera intrinsic parameters.
  • The endpoints are sampled multiple times to obtain multiple groups of three-dimensional normal vectors.
  • Sampling the endpoints of each straight line segment specifically includes:
  • taking the two-dimensional coordinates of the center point of each endpoint as the mean and the measurement uncertainty of each center point as the standard deviation, and sampling the endpoints of each straight line segment with a Gaussian sampling method.
  • Calculating the vanishing points of the image to be tested in each direction includes: calculating the vanishing point in each direction by maximum likelihood estimation.
  • Constructing the vanishing-line equation of the support plane includes: constructing the vanishing-line equation with the two-point line equation formula.
  • The three-dimensional normal vector is calculated as n = K^T l, where n is the three-dimensional normal vector of the support plane in the camera coordinate system, K is the matrix of camera intrinsic parameters, T is the matrix transpose symbol, and l is the vanishing-line equation.
  • A preferred technical solution provided by the present invention is that estimating the three-dimensional pose of the target object from the observation clues and the three-dimensional deformation model specifically includes: constructing, from the feature observation points of the target object and the deformation model, an objective function for fitting the deformation model, and optimizing it to obtain optimized three-dimensional pose parameters and three-dimensional shape parameters of the target object.
  • The objective function is parameter-initialized multiple times to obtain multiple groups of optimized three-dimensional pose parameters and three-dimensional shape parameters.
  • The objective function minimizes, over the pose and shape parameters, the sum Σ_{n=1..N} d²(c_n, ĉ_{m(n)}), where N and n are the total number and index of the feature observation points of the target object in the image to be tested; c_n is the n-th feature observation point of the target object and ĉ_m is the m-th feature observation point of the two-dimensional projection of the three-dimensional deformation model, ĉ_{m(n)} being the projection point currently corresponding to c_n; d²(c_n, ĉ_m) is the squared Euclidean distance between the feature observation points c_n and ĉ_m; θ_p and θ_s are the three-dimensional pose parameter and three-dimensional shape parameter of the target object; and θ_c denotes the camera intrinsic parameters.
  • The constraint of the objective function is |(θ_s)_n| ≤ k σ_n, where (θ_s)_n is the n-th component of the three-dimensional shape parameter of the target object, σ_n is the standard deviation of the n-th principal component direction when the deformation model is constructed by principal component analysis, and k is a preset constant.
  • Optimizing the objective function includes optimizing it with the iterative closest point algorithm, specifically: finding, among the feature observation points of the two-dimensional projection of the deformation model, the point closest to each feature observation point of the target object, and correcting the correspondence between the deformation model and its two-dimensional projection according to these closest-point pairs;
  • performing parameter optimization on the corrected deformation model, and re-correcting the correspondence between the parameter-optimized deformation model and its two-dimensional projection, until the residual of the objective function satisfies the convergence condition or a preset number of iterations is reached;
  • the parameters include the three-dimensional pose parameter and the three-dimensional shape parameter.
  • Performing multiple parameter initializations on the objective function specifically includes:
  • randomly selecting multiple values within a parameter dispersion region centered on a preset parameter value, and using those values as the initial parameter values for the respective optimization runs of the objective function.
  • The method includes calculating the parallelism between the target object and the support plane, and/or between multiple target objects, as D = Ang(p_0, q_0), where P is the distribution set of the plane normal vectors of one target object, and Q is the distribution set of the three-dimensional normal vectors of the support plane or the distribution set of the plane normal vectors of another target object;
  • D is the angle between the average directions of the distribution set P and the distribution set Q;
  • p_0 is the weighted average of the distribution set P;
  • q_0 is the weighted average of the distribution set Q;
  • Ang is the angle-calculation function.
  • The weighted average g_0 of any distribution set G (a distribution set of plane normal vectors of a target object, or the distribution set of three-dimensional normal vectors of the support plane) is computed from the normal vectors g_a in G, a = 1, ..., A, with A the total number of normal vectors in G, each vector weighted according to its residual e_a.
  • The residual e_a of the a-th normal vector g_a is determined as follows: when G is a distribution set of plane normal vectors of a target object, e_a is the residual satisfying the convergence condition obtained by optimizing the objective function of the three-dimensional deformation model; when G is the distribution set of three-dimensional normal vectors of the support plane, e_a is a fixed constant.
  • The method further includes calculating, from the parallelism probability density distributions of real target objects and tampered target objects, a parallelism threshold and a tampering probability for judging whether the image to be tested is a tampered image. Specifically, the tampering probability is P(y=1 | D) = f(D | y=1) / (f(D | y=1) + f(D | y=0)), where y=1 indicates that the image to be tested is a tampered image and y=0 that it is a real image;
  • D is the parallelism between the target object and the support plane in the image to be tested;
  • P(y=1 | D) is the probability that the image to be tested is a tampered image when the parallelism of the target object is D;
  • f(D | y=1) and f(D | y=0) are the probability densities of the parallelism D when the image to be tested is a tampered image and a real image, respectively; the prior probabilities of the image to be tested being a tampered image and a real image are taken as equal.
  • The parallelism threshold is the parallelism D_50% corresponding to a tampering probability of 50%.
  • In a second aspect, the technical solution of an image tampering forensics device in the present invention is:
  • The device includes:
  • an observation clue labeling module configured to mark the observation clues of the image to be tested, wherein the image to be tested includes a target object and a support plane having a plane contact relationship;
  • a three-dimensional deformation model construction module configured to construct a three-dimensional deformation model of the object category to which the target object belongs;
  • a support plane normal vector estimation module configured to estimate the three-dimensional normal vector of the support plane from the observation clues;
  • a target object normal vector estimation module configured to estimate the three-dimensional pose of the target object from the observation clues and the three-dimensional deformation model, and thereby obtain the plane normal vector of the plane on the side of the target object that contacts the support plane;
  • a judgment module configured to calculate, from the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, and to judge from the parallelism whether the image to be tested is a tampered image, wherein the parallelism is the angle between the normal vectors of different planes.
  • The image tampering forensics method detects the parallelism between a target object and a support plane having a plane contact relationship in the image to be tested, and judges from the magnitude of the parallelism whether the image to be tested is a tampered image.
  • The method does not depend on subtle image statistics in the image to be tested and can effectively determine whether a low-quality image is a tampered image.
  • In the image tampering forensics device provided by the present invention, the support plane normal vector estimation module can estimate the three-dimensional normal vector of the support plane, the target object normal vector estimation module can estimate the plane normal vector of the target object, and the judgment module can calculate, from these normal vectors, the parallelism between the target object and the support plane and/or between multiple target objects, and effectively judge from the parallelism whether a low-quality image is a tampered image.
  • FIG. 1 is an implementation flowchart of an image tampering forensics method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of the contact relationship between a target object and a support plane in an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of an image tampering forensics device according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a three-dimensional deformation model of shoes according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the three-dimensional poses obtained by fitting the three-dimensional deformation model to target object ID1;
  • FIG. 6 is a schematic diagram of the three-dimensional poses obtained by fitting the three-dimensional deformation model to target object ID2;
  • FIG. 7 is a schematic diagram of the normal vector distribution sets;
  • FIG. 8 is a schematic diagram of the 3D sample models after semantic correspondence;
  • FIG. 9 is a first schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;
  • FIG. 10 is a second schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;
  • FIG. 11 is a third schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;
  • FIG. 12 shows the parallelism probability density distributions of a real target object and a tampered target object.
  • Reference numerals: 11: observation clue labeling module; 12: three-dimensional deformation model construction module; 13: support plane normal vector estimation module; 14: target object normal vector estimation module; 15: judgment module;
  • 211: three-dimensional pose of the left shoe in shoes ID1; 221: three-dimensional pose of the right shoe in shoes ID1;
  • 231: three-dimensional pose of the left shoe in shoes ID2; 232: preliminary three-dimensional pose of the left shoe in shoes ID2; 233: intermediate three-dimensional pose of the left shoe in shoes ID2; 234: final three-dimensional pose of the left shoe in shoes ID2;
  • 241: three-dimensional pose of the right shoe in shoes ID2; 242: preliminary three-dimensional pose of the right shoe in shoes ID2; 243: intermediate three-dimensional pose of the right shoe in shoes ID2; 244: final three-dimensional pose of the right shoe in shoes ID2;
  • 41: 3D sample model of a leather shoe; 42: 3D sample model of the leather shoe after semantic correspondence; 51: 3D sample model of a casual shoe; 52: 3D sample model of the casual shoe after semantic correspondence; 61: 3D sample model of a sports shoe; 62: 3D sample model of the sports shoe after semantic correspondence;
  • 71: plane normal vector distribution of shoes ID1; 72: plane normal vector distribution of shoes ID2; 73: three-dimensional normal vector distribution of the ground.
  • A plane contact relationship means that a contact plane exists between an object and the part that supports it.
  • For example: for a person standing on the ground, there is a plane contact relationship between the ground and the person's soles; for a car on a road, between the road surface and the bottoms of the car's tires; for a bottle on a table, between the tabletop and the bottom of the bottle. Because a plane contact relationship exists between the target object and the support plane, the coordinate system of the target object should be parallel to the coordinate system of the support plane, and the coordinate systems of different target objects having the same plane contact relationship with the support plane should also be parallel to one another. FIG. 2 illustrates this contact relationship: O1z1 is parallel to Opzp, and both are parallel to O2z2.
  • When an image is a tampered image, for example an image formed by splicing with PS software, the spliced object can hardly form a true plane contact relationship with the support plane in the three-dimensional scene; that is, image splicing is likely to break the plane contact constraint of the image to be tested.
  • The image tampering forensics method proposed by the invention therefore judges whether the image to be tested is a tampered image by detecting the angle between the plane normal vectors of the target object and of its support plane in the image. If the angle is 0°, the plane normal vectors of the target object and its support plane are perfectly parallel and the image to be tested is a real image; the larger the angle, the more likely the image to be tested is a tampered image.
  • FIG. 1 shows the implementation flow of an image tampering forensics method. As shown in the figure, in this embodiment the following steps can be used to judge whether the image to be tested is a tampered image:
  • Step S101: Mark the observation clues of the image to be tested.
  • In this embodiment, marking the observation clues covers two aspects: marking the feature observation points of the target object, and marking the endpoints of straight line segments in two different directions parallel to the support plane in the image to be tested.
  • The feature observation points of the target object can be marked with an interactive mouse-dragging method, forming the contour of the target object.
  • The endpoints of the line segments can be marked as follows:
  • 1. Mark the center points at both ends of each straight line segment with an interactive mouse-clicking method. The segments in each direction should include several parallel segments; for example, each direction may include two parallel segments, in which case the center points of the eight endpoints of four segments need to be marked. Straight line segments already present in the image and parallel to the support plane can also be selected for endpoint marking.
  • 2. Because of the limited display quality of the image to be tested, even segments already present in the image are somewhat blurred; therefore the measurement uncertainty of each center point is set according to the blur of the segment's edge points, and the dispersion region of each center point is marked according to that uncertainty. A marked segment endpoint in this embodiment thus comprises a center point and its dispersion region; marking the dispersion region compensates for the uncertainty in the center point's position.
  • Step S102: Construct the three-dimensional deformation model of the object category to which the target object belongs.
  • The object category to which the target object belongs is a superordinate concept of the target object and refers to the type of object to which the target object belongs.
  • For example, if the target object is a sports shoe, its object category can be determined to be shoes, which may specifically include several types such as sports shoes, leather shoes, or casual shoes; the three-dimensional deformation model of the object category then refers to a three-dimensional deformation model of shoes.
  • In this embodiment, the three-dimensional deformation model of the object category to which the target object belongs can be constructed by the following steps:
  • 1. Acquire a plurality of 3D sample models of samples classified under the object category to which the target object belongs; these samples are of the same type as the target object. The 3D sample models can be obtained by downloading models already stored in drawing software such as CAD software, or by directly scanning the physical samples with a 3D model scanning device such as a Kinect.
  • 2. Establish semantic correspondence among the vertices of all 3D sample models obtained in step 1. This step belongs to the field of model registration for 3D models, so non-rigid registration methods such as non-rigid ICP can be used to put the vertices of each 3D sample model into semantic correspondence.
  • 3. Analyze all semantically corresponded 3D sample models by principal component analysis (PCA) to construct the three-dimensional deformation model of the object category. Specifically:
  • (1) Represent each 3D sample model as a one-dimensional column vector. First obtain the three-dimensional coordinates of each semantically corresponded 3D sample model, and take the three-dimensional coordinates of each vertex as the elements of the column vector, as in formula (1): S_i = [v_{i,1}; v_{i,2}; ...; v_{i,N_v}], where 1 ≤ i ≤ N_s, i and N_s are the index and total number of the semantically corresponded 3D sample models; v_{i,j} is the three-dimensional coordinate of the j-th vertex of the i-th model, 1 ≤ j ≤ N_v, and j and N_v are the index and total number of vertices in a 3D sample model.
  • (2) Splice the one-dimensional column vectors of the N_s 3D sample models column by column to form the 3D sample model matrix.
  • (3) Analyzing the 3D sample model matrix by PCA yields the three-dimensional deformation model (S_0, Φ) of the object category, where S_0 is the average shape and Φ is the principal-variation-direction matrix. Each column of Φ represents a significant direction of shape variation, and the dimension of each column equals that of the column vectors S_i. Based on (S_0, Φ), a new shape of the object category can be expressed as the linear equation of formula (2): S(θ_s) = S_0 + Φ θ_s, where θ_s is the three-dimensional shape parameter.
  • Step S103: Estimate the three-dimensional normal vector of the support plane from the observation clues. The specific steps are:
  • 1. First, sample once the endpoints of the straight line segments marked in the two different directions parallel to the support plane, obtaining the endpoint coordinates of each segment. In this embodiment Gaussian sampling can be used: the two-dimensional coordinates of each endpoint's center point are set as the mean of the Gaussian distribution, and the measurement uncertainty of each center point as the standard deviation of the Gaussian distribution.
  • 2. Because parallel lines that are not parallel to the imaging plane intersect at a vanishing point, the vanishing points of the image in the two directions can be computed from the sampled endpoints; connecting the two vanishing points yields the vanishing line of the support plane. Maximum likelihood estimation can be used to compute each vanishing point, maximizing the likelihood of the observed segment endpoints; once the coordinates of the two vanishing points are known, the vanishing-line equation is constructed with the two-point line equation formula.
  • 3. Compute the three-dimensional normal vector of the support plane by formula (3): n = K^T l, where n is the three-dimensional normal vector of the support plane in the camera coordinate system, K is the matrix of camera intrinsic parameters, T is the matrix transpose symbol, and l is the vanishing-line equation.
  • The camera intrinsics can be obtained by conventional means. First, the camera intrinsic matrix can be assumed known, with the optical center at the center of the image to be tested and the focal length obtained from the image header file (e.g. EXIF); second, the intrinsics can be calculated from three mutually perpendicular groups of parallel lines in the image to be tested.
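  • For instance, under the first of the two conventions above (optical center at the image center, focal length from EXIF), the intrinsic matrix K can be assembled as in the following sketch; the sensor width is an extra input assumed known here, since EXIF data does not always contain it.

```python
import numpy as np

def intrinsic_matrix(f_mm, sensor_width_mm, image_w, image_h):
    """Camera intrinsic matrix under the conventional assumptions in
    the text: optical center at the image center, square pixels, no
    skew. f_mm is the focal length from the image header (EXIF) and
    sensor_width_mm is the physical sensor width."""
    f = f_mm / sensor_width_mm * image_w       # focal length in pixels
    return np.array([[f, 0.0, image_w / 2.0],
                     [0.0, f, image_h / 2.0],
                     [0.0, 0.0, 1.0]])
```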
  • As described in step S101, each segment endpoint includes a certain dispersion region, so in this embodiment the segment endpoints can be sampled multiple times, for example with the number of samples set to 500, repeating steps 1-3 to obtain multiple groups of three-dimensional normal vectors of the support plane.
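  • The following sketch walks through one sampling round of steps 1-3; the simple SVD-based line intersection stands in for the maximum likelihood estimate, and the helper names are illustrative. Calling support_plane_normal, e.g., 500 times with fresh noise yields the distribution set of support-plane normals described above.

```python
import numpy as np

def vanishing_point(segments):
    """Vanishing point of roughly parallel 2D segments, given as
    (p, q) endpoint pairs. A linear least-squares stand-in for the
    maximum-likelihood estimate described in step 2."""
    lines = [np.cross((*p, 1.0), (*q, 1.0)) for p, q in segments]
    _, _, Vt = np.linalg.svd(np.asarray(lines))
    v = Vt[-1]
    return v / v[2]                            # homogeneous 3-vector

def support_plane_normal(segs_dir1, segs_dir2, K, rng, sigma=2.0):
    """One sample of formula (3), n = K^T l: perturb the marked
    endpoints with Gaussian noise (step 1), intersect each direction's
    segments in a vanishing point (step 2), join the two points into
    the vanishing line l and map it to the plane normal (step 3)."""
    jitter = lambda segs: [(rng.normal(p, sigma), rng.normal(q, sigma))
                           for p, q in segs]
    v1 = vanishing_point(jitter(segs_dir1))
    v2 = vanishing_point(jitter(segs_dir2))
    l = np.cross(v1, v2)                       # two-point line equation
    n = K.T @ l
    return n / np.linalg.norm(n)
```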
  • Step S104: Estimate the three-dimensional pose of the target object from the observation clues and the three-dimensional deformation model, and thereby obtain the plane normal vector of the plane on the side of the target object that contacts the support plane.
  • In this embodiment, the target object contour obtained by marking the feature observation points in step S101 is used as the fitting target, and the three-dimensional deformation model constructed in step S102 is fitted to it, yielding the three-dimensional pose parameters and three-dimensional shape parameters of the target object; the plane normal vector is finally determined from the three-dimensional pose parameters. Specifically:
  • 1. From the feature observation points of the target object in the image to be tested and the three-dimensional deformation model, construct the objective function for fitting the deformation model, formula (4): minimize over (θ_p, θ_s) the sum Σ_{n=1..N} d²(c_n, ĉ_{m(n)}), where N and n are the total number and index of the feature observation points of the target object; c_n is the n-th feature observation point of the target object and ĉ_m the m-th feature observation point of the two-dimensional projection of the deformation model, ĉ_{m(n)} being the one currently corresponding to c_n; d²(c_n, ĉ_m) is the squared Euclidean distance between them; θ_p and θ_s are the three-dimensional pose and shape parameters of the target object; and θ_c denotes the camera intrinsics.
  • The optimization objective of the objective function in this embodiment is to minimize, by optimizing the three-dimensional pose and shape parameters of the target object, the Euclidean distance between the contour of the target object in the image to be tested and the contour of the two-dimensional projection of the three-dimensional deformation model.
  • The feature observation points c_n are the contour points of the target object in the image to be tested, and the feature observation points ĉ_m are the contour points of the two-dimensional projection of the deformation model; they can be obtained per formula (5), ĉ_m ∈ contour(project(S(θ_s); θ_p, θ_c)), where contour(project(·)) denotes the operation of extracting the contour from the two-dimensional projection of the deformation model.
  • The constraint of the objective function is given by formula (6): |(θ_s)_n| ≤ k σ_n, where (θ_s)_n is the n-th component of the three-dimensional shape parameter of the target object, σ_n is the standard deviation of the n-th principal component direction when the deformation model is constructed by principal component analysis, and k is a preset constant.
  • 2. Optimize the objective function to obtain the optimized three-dimensional pose and shape parameters of the target object. In this embodiment the iterative closest point algorithm can be used, specifically:
  • (1) Initialize the parameters of the objective function once; the parameters include the three-dimensional pose parameters and the three-dimensional shape parameters.
  • (2) Among the feature observation points of the two-dimensional projection of the deformation model, find the point closest to each feature observation point of the target object in the image to be tested, and correct the correspondence between the deformation model and its two-dimensional projection according to these closest-point pairs.
  • (3) Perform parameter optimization on the corrected deformation model and check whether the residual of the objective function satisfies the convergence condition or the preset number of iterations has been reached; if not, return to step (2) and re-correct the correspondence between the parameter-optimized deformation model and its two-dimensional projection.
  • The least-squares method can be used for the parameter optimization in step (3).
  • Because the objective function of formula (4) is severely non-convex, the optimization result depends on the choice of initial parameter values. To reduce this uncertainty, the optimization is run multiple times in this embodiment, with the parameters re-initialized in each run (for example, 20 initializations), yielding multiple groups of optimized pose and shape parameters and hence multiple groups of plane normal vectors of the target object.
  • Each initialization can proceed as follows: given a preset parameter value, set a parameter dispersion region centered on that value, then randomly select a value within the region as the initial parameter value for that run.
  • Furthermore, the objective functions of multiple target objects that share a set of three-dimensional shape parameters can be optimized simultaneously to reduce the parameter degrees of freedom and improve fitting accuracy. For example, for a person standing on the ground, the person's two shoes both have a plane contact relationship with the ground, satisfy the coplanarity constraint, and share one set of three-dimensional shape parameters, so the objective functions of the two shoes can be optimized at the same time.
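  • A minimal sketch of the fitting loop of step 2 follows, using SciPy's bounded least squares for step (3) and a k-d tree for the closest-point search of step (2). Here project_contour is a hypothetical renderer with the camera intrinsics θ_c baked in, sigma comes from the PCA sketch above, and the convergence test on the parameter change simplifies the residual test in the text.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

def fit_pose(contour_pts, sigma, project_contour, theta_p0, theta_s0,
             k=3.0, iters=30):
    """ICP-style minimization of objective (4) under constraint (6).

    contour_pts: (N, 2) marked contour points c_n of the target object.
    project_contour(theta_p, theta_s) -> (M, 2): contour points of the
    model's 2D projection (hypothetical; not named in the patent).
    """
    n_p, n_s = len(theta_p0), len(theta_s0)
    x = np.concatenate([theta_p0, theta_s0])
    lb = np.r_[np.full(n_p, -np.inf), -k * sigma[:n_s]]   # constraint (6)
    ub = np.r_[np.full(n_p, np.inf), k * sigma[:n_s]]
    for _ in range(iters):
        proj = project_contour(x[:n_p], x[n_p:])
        m = cKDTree(proj).query(contour_pts)[1]           # closest points

        def residual(z):
            pr = project_contour(z[:n_p], z[n_p:])
            return (pr[m] - contour_pts).ravel()          # terms of (4)

        res = least_squares(residual, x, bounds=(lb, ub))
        if np.linalg.norm(res.x - x) < 1e-6:              # converged
            break
        x = res.x
    return x[:n_p], x[n_p:], res.cost
```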
  • Step S105: Calculate the parallelism between the target object and the support plane, and/or between multiple target objects, and judge from the parallelism whether the image to be tested is a tampered image.
  • In this embodiment, the parallelism of different planes is used to evaluate the pose consistency of the image to be tested, and the parallelism between different planes is represented by the angle between their normal vectors.
  • The preceding step S103 produced multiple groups of three-dimensional normal vectors of the support plane, and step S104 produced multiple groups of plane normal vectors of the plane on the side of the target object contacting the support plane.
  • Each three-dimensional normal vector and plane normal vector can be regarded as a point, so a set P can represent the distribution of the plane normal vectors of one target object, and a set Q the distribution of the three-dimensional normal vectors of the support plane or of the plane normal vectors of another target object.
  • The parallelism between the target object and the support plane, and/or between multiple target objects, can then be calculated per formula (7): D = Ang(p_0, q_0), where D is the angle between the average directions of the distribution sets P and Q; p_0 and q_0 are the weighted averages of P and Q respectively; and Ang is the angle-calculation function.
  • The weighted averages of the distribution sets P and Q are calculated in the same way, so one distribution set G is used to introduce the calculation, where G is a distribution set of plane normal vectors of a target object or the distribution set of three-dimensional normal vectors of the support plane.
  • Per formula (8), the weighted average g_0 of G weights each normal vector g_a in G (a = 1, ..., A, with A the total number of vectors in G) according to its residual e_a: when G is a distribution set of plane normal vectors of a target object, e_a is the convergence residual obtained when optimizing the objective function of the deformation model; when G is the distribution set of three-dimensional normal vectors of the support plane, e_a is a fixed constant, for example 1.
  • Whether the image to be tested is a tampered image can then be judged from the size of the parallelism: the larger the parallelism, the more likely the image to be tested is a tampered image.
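  • A sketch of the parallelism computation of formulas (7) and (8) is given below. The exact weighting in formula (8) is not reproduced in the source text, so inverse-residual weights w_a = 1/e_a are assumed here, and antipodal normals are treated as parallel; both choices are assumptions of this sketch.

```python
import numpy as np

def weighted_mean_direction(G, e):
    """Formula (8) sketch: average the unit normals g_a in G weighted
    by their residuals e_a (assumed inverse weighting w_a = 1/e_a)."""
    w = 1.0 / np.asarray(e)
    g0 = (w[:, None] * np.asarray(G)).sum(axis=0) / w.sum()
    return g0 / np.linalg.norm(g0)

def parallelism(P, e_p, Q, e_q):
    """Formula (7): angle D (degrees) between the mean directions of
    the distribution sets P and Q."""
    p0 = weighted_mean_direction(P, e_p)
    q0 = weighted_mean_direction(Q, e_q)
    return np.degrees(np.arccos(np.clip(abs(p0 @ q0), -1.0, 1.0)))
```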
  • Further, in the case where the parallelism probability density distributions of real target objects and tampered target objects in images to be tested are known, the present invention also provides a technical solution for computing the parallelism threshold and the tampering probability used to judge whether the image to be tested is a tampered image.
  • Specifically, the tampering probability of the image to be tested can be calculated per formula (9): P(y=1 | D) = f(D | y=1) / (f(D | y=1) + f(D | y=0)), where y=1 denotes a tampered image and y=0 a real image; D is the parallelism between the target object and the support plane; f(D | y=1) is the probability density of the parallelism D when the image to be tested is a tampered image, and f(D | y=0) the probability density when it is a real image; the prior probabilities of the image being tampered and real are taken as equal. The parallelism threshold D_50%, given by formula (10), is the parallelism at which the tampering probability equals 50%.
  • When the image to be tested contains multiple target objects, the parallelisms of the target objects to the support plane and to one another can be compared, and the per-object tampering probabilities calculated with the above formulas can be combined to judge comprehensively whether the image is tampered.
  • For example, for two people standing on the ground, let the shoes worn by one person be shoes A and those worn by the other be shoes B; the two pairs of shoes are the target objects and the ground is the support plane.
  • The parallelism between shoes A and the ground is 0.59° with a tampering probability of 15.6%; between shoes B and the ground, 16.56° with a tampering probability of 96.9%; and between the two pairs of shoes, 16.63°, again with a tampering probability of 96.9%.
  • With the threshold D_50% = 4.61° calculated from formula (10): since 0.59° < 4.61°, shoes A are real; since 16.56° > 4.61°, shoes B are tampered; therefore the image to be tested of "the people standing on the ground" is a tampered image.
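  • Using the numbers quoted above, the per-object decision reduces to a threshold comparison, as in this tiny sketch:

```python
# Threshold D50% = 4.61 degrees; parallelism values from the example.
for name, D in [("shoes A vs ground", 0.59),
                ("shoes B vs ground", 16.56),
                ("shoes A vs shoes B", 16.63)]:
    print(name, "->", "tampered" if D > 4.61 else "consistent with real")
```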
  • The image tampering forensics method provided by this embodiment is based on computer vision and image recognition technology and judges whether the image to be tested is a tampered image by detecting the parallelism between a target object and a support plane having a plane contact relationship in the image. It does not depend on subtle image statistics in the image to be tested, so it can effectively perform tamper forensics on low-quality images.
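  • A sketch of the threshold and probability computation follows. The two conditional densities f(D | y) are fitted here with Gaussian kernel density estimates over training parallelism samples, which is an assumed density model; the patent only requires the two densities and equal priors.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

def tamper_probability(D, d_real, d_fake):
    """Formula (9) with equal priors: P(y=1|D) estimated from
    parallelism samples of real (d_real) and tampered (d_fake)
    objects via kernel density estimates."""
    f1 = gaussian_kde(d_fake)(D)[0]
    f0 = gaussian_kde(d_real)(D)[0]
    return f1 / (f1 + f0)

def threshold_d50(d_real, d_fake, d_max=90.0):
    """Formula (10): the parallelism D_50% where P(y=1|D) = 0.5,
    i.e. where the two density curves in FIG. 12 cross (assumes a
    single crossing in (0, d_max))."""
    return brentq(lambda d: tamper_probability(d, d_real, d_fake) - 0.5,
                  1e-3, d_max)
```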
  • In the following, an image to be tested containing the ground, a wall, a ceiling, two people standing on the ground, and a bucket placed on the ground is used as an example to verify the effect of the image tampering forensics method provided by the present invention. Specifically:
  • The image to be tested includes multiple target objects and a support plane having plane contact relationships.
  • The shoes of the people standing on the ground are selected as the target objects and the ground as the support plane; the shoes of the person standing on the left side of the image to be tested are denoted ID1, and the shoes of the person standing on the right side are denoted ID2. The bucket and the person on the left are real, while the person on the right is an image spliced into the image to be tested.
  • The straight line segments marked in each direction include two parallel segments, and the endpoints of each segment are marked by the interactive mouse-clicking method.
  • The 3D sample models of shoe samples in this embodiment of the present invention mainly include 3D sample models of sports shoes, leather shoes, casual shoes, and the like.
  • FIG. 8 shows the 3D sample models before and after semantic correspondence: the first row shows, before model registration, the 3D sample model 41 of a leather shoe, the 3D sample model 51 of a casual shoe, and the 3D sample model 61 of a sports shoe; the corresponded models are 42, 52, and 62 respectively.
  • FIG. 4 shows the three-dimensional deformation model of shoes obtained in this embodiment.
  • FIG. 5 shows the three-dimensional poses obtained by fitting the deformation model to target object ID1: the left image is the three-dimensional pose 211 of the left shoe in shoes ID1, and the right image is the three-dimensional pose 221 of the right shoe.
  • FIG. 6 shows the three-dimensional poses obtained by fitting the deformation model to target object ID2: the left image is the three-dimensional pose 231 of the left shoe in shoes ID2, and the right image is the three-dimensional pose 241 of the right shoe.
  • FIGS. 9 to 11 show the three-dimensional poses obtained in the course of fitting the three-dimensional deformation model.
  • In FIG. 9, the left image is the preliminary three-dimensional pose 232 of the left shoe in shoes ID2 and the right image is the preliminary three-dimensional pose 242 of the right shoe;
  • in FIG. 10, the left image is the intermediate three-dimensional pose 233 of the left shoe in shoes ID2 and the right image is the intermediate three-dimensional pose 243 of the right shoe;
  • in FIG. 11, the left image is the final three-dimensional pose 234 of the left shoe in shoes ID2 and the right image is the final three-dimensional pose 244 of the right shoe.
  • FIG. 7 shows the normal vector distribution sets. As shown, the horizontal and vertical coordinates represent the azimuth and zenith angles of the normal vectors, respectively.
  • The plane normal vector distribution 71 of shoes ID1, the plane normal vector distribution 72 of shoes ID2, and the three-dimensional normal vector distribution 73 of the ground are each presented as a point set in the two-dimensional coordinates, with one point in each set representing one normal vector.
  • FIG. 12 shows the parallelism probability density distributions of real target objects and tampered target objects. As shown, the intersection of the two curves is the parallelism threshold, i.e. the point at which the tampering probability is 50%.
  • In this embodiment, the parallelism threshold D_50% = 4.61° is calculated by formula (10); the parallelism between the bucket and the ground in the image to be tested is calculated to be less than 4.61°, so the bucket is judged to be a real image.
  • The above tamper detection and judgments on the known real target objects and the known spliced target object in the image to be tested agree with the known ground truth, demonstrating that the image tampering forensics method provided by the present invention can effectively detect whether an image to be tested is a tampered image.
  • Based on the same technical concept as the method embodiment, an embodiment of the present invention also provides an image tampering forensics device.
  • The image tampering forensics device is described in detail below with reference to the accompanying drawings.
  • FIG. 3 shows the structure of an image tampering forensics device in an embodiment of the present invention.
  • The image tampering forensics device in this embodiment may include an observation clue labeling module 11, a three-dimensional deformation model construction module 12, a support plane normal vector estimation module 13, a target object normal vector estimation module 14, and a judgment module 15.
  • The observation clue labeling module 11 is used to mark the observation clues of the image to be tested; the three-dimensional deformation model construction module 12 is used to construct a three-dimensional deformation model of the object category to which the target object belongs; the support plane normal vector estimation module 13 is used to estimate the three-dimensional normal vector of the support plane from the observation clues; the target object normal vector estimation module 14 is used to estimate the three-dimensional pose of the target object from the observation clues and the deformation model, and thereby obtain the plane normal vector of the plane on the side of the target object that contacts the support plane; and the judgment module 15 is used to calculate, from the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, and to judge from the parallelism whether the image to be tested is a tampered image.
  • In this embodiment, the observation clue labeling module 11 may further include a first labeling unit for marking the feature observation points of the target object in the image to be tested, and a second labeling unit for marking the endpoints of straight line segments in two different directions parallel to the support plane in the image to be tested.
  • The three-dimensional deformation model construction module 12 in this embodiment may further include a model registration unit and a model construction unit.
  • The model registration unit is configured to acquire a plurality of 3D sample models of samples classified under the object category to which the target object belongs and to establish semantic correspondence among the vertices of each 3D sample model; the model construction unit is configured to construct the three-dimensional deformation model from all semantically corresponded 3D sample models by principal component analysis.
  • The support plane normal vector estimation module 13 in this embodiment may further include a vanishing point calculation unit, a vanishing line calculation unit, and a three-dimensional normal vector calculation unit.
  • The vanishing point calculation unit is configured to sample the endpoints of the straight line segments marked in two different directions parallel to the support plane in the image to be tested, and to calculate the vanishing points of the image in the two directions from the sampled endpoints;
  • the vanishing line calculation unit is configured to construct the vanishing-line equation of the support plane from the three-dimensional homogeneous coordinates of the vanishing points; and the three-dimensional normal vector calculation unit is configured to calculate the three-dimensional normal vector of the support plane from the vanishing-line equation and the camera intrinsic parameters.
  • The target object normal vector estimation module 14 in this embodiment may further include an objective function construction unit, an objective function optimization calculation unit, and a plane normal vector calculation unit.
  • The objective function construction unit is configured to construct, from the feature observation points of the target object in the image to be tested and the three-dimensional deformation model, the objective function for fitting the deformation model;
  • the objective function optimization calculation unit is configured to optimize the objective function to obtain the optimized three-dimensional pose parameters and three-dimensional shape parameters of the target object;
  • the plane normal vector calculation unit is configured to calculate, from the three-dimensional pose parameters, the plane normal vector of the plane on the side of the target object that contacts the support plane.
  • The judgment module 15 in this embodiment may further include a parallelism calculation unit, whose calculation model is given by formulas (7) and (8).
  • This embodiment also provides a preferred implementation of the image tampering forensics device, in which the device further includes a parallelism threshold calculation unit and a tampering probability calculation unit.
  • The calculation model of the tampering probability calculation unit is given by formulas (9) and (10);
  • the parallelism threshold calculation unit is configured to calculate the parallelism D_50% at which the tampering probability is 50%, and to use D_50% as the parallelism threshold.
  • The above embodiment of the image tampering forensics device can be used to perform the above embodiment of the image tampering forensics method; its technical principles, the technical problems solved, and the technical effects produced are similar.
  • Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the image tampering forensics device described above may refer to the corresponding processes in the foregoing method embodiment and are not repeated here.
  • The above image tampering forensics device also includes some other well-known structures, such as processors, controllers, and memories, where the memories include but are not limited to random access memory, flash memory, read-only memory, programmable read-only memory, volatile memory, non-volatile memory, serial memory, parallel memory, registers, and the like, and the processors include but are not limited to CPLD/FPGA, DSP, ARM processors, MIPS processors, and the like. In order not to unnecessarily obscure the present disclosure, these well-known structures are not shown in FIG. 3.
  • The modules in the devices of the embodiments can be adaptively changed and placed in one or more devices different from those of the embodiments.
  • The modules or units or components of the embodiments may be combined into one module or unit or component, and may furthermore be divided into a plurality of sub-modules or sub-units or sub-components.
  • All features disclosed in this specification (including the accompanying claims, abstract, and drawings), and all processes or units of any method or device so disclosed, may be combined in any combination.
  • Each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
  • The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof.
  • Those skilled in the art should understand that a microprocessor or digital signal processor may be used in practice to implement some or all of the functions of some or all of the servers and clients according to embodiments of the present invention.
  • The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein.
  • Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image tampering forensics method and device. The method includes: marking observation clues of an image to be tested (S101); constructing a three-dimensional deformation model of the object category to which a target object belongs (S102); estimating the three-dimensional normal vector of a support plane from the observation clues (S103); estimating the three-dimensional pose of the target object from the observation clues and the three-dimensional deformation model, and thereby obtaining the plane normal vector of the plane on the side of the target object that contacts the support plane (S104); and calculating the parallelism between the target object and the support plane and/or between multiple target objects, and judging from the parallelism whether the image to be tested is a tampered image (S105). Compared with the prior art, the image tampering forensics method and device judge whether the image to be tested is tampered from the magnitude of the parallelism between the target object and the support plane and/or between multiple target objects, and can effectively determine whether a low-quality image is a tampered image.

Description

Image tampering forensics method and device

Technical Field

The present invention relates to the field of computer vision and image recognition, and in particular to an image tampering forensics method and device.
Background Art

As a technology that identifies the authenticity and source of images without relying on any pre-extracted signature or pre-embedded information, digital image blind forensics is gradually becoming a new research hotspot in the field of multimedia security and has broad application prospects. At present, digital image blind forensics comprises a variety of forensic methods according to different forensic clues, such as copy-move, multiple JPEG compression, high-frequency statistical features of images, illumination inconsistency, and geometric inconsistency. Among them, forensic methods based on inconsistency clues in the scene use computer vision methods to estimate variables in the scene; they are suitable for tamper forensics on low-quality images and are comparatively robust to post-processing.

However, forensic methods based on scene inconsistency clues are generally applicable only to image tamper forensics in one particular kind of scene, which may limit the accuracy of their detection results. For example, the document "Iuliani, Massimo, Giovanni Fabbri, and Alessandro Piva. "Image splicing detection based on general perspective constraints." Information Forensics and Security (WIFS), 2015 IEEE International Workshop on. IEEE, 2015" discloses an image tampering forensics method based on the height ratios of objects in the scene; the document "Peng, Bo, et al. "Optimized 3D Lighting Environment Estimation for Image Forgery Detection." IEEE Transactions on Information Forensics and Security 12.2 (2017): 479-494" discloses an image tampering forensics method based on inconsistent illumination directions; and the document "Farid, Hany. "A 3-D photo forensic analysis of the Lee Harvey Oswald backyard photo." Hanover, NH (2010)." discloses an image tampering forensics method based on manually assisted 3D scene reconstruction and analysis.

In view of this, the present invention proposes an image tampering forensics method based on a new scene forensic clue, so as to improve the detection accuracy of forensic methods based on inconsistency clues in the scene.
Summary of the Invention

To meet the needs of the prior art, the present invention proposes an image tampering forensics method based on plane contact constraints; this forensics method is not only suitable for tamper detection on low-quality images, but also improves the detection accuracy of forensic methods based on inconsistency clues in the scene. The present invention also provides an image tampering forensics device.
In a first aspect, the technical solution of an image tampering forensics method in the present invention is:

The method includes:

marking observation clues of an image to be tested, wherein the image to be tested includes a target object and a support plane having a plane contact relationship;

constructing a three-dimensional deformation model of the object category to which the target object belongs;

estimating the three-dimensional normal vector of the support plane according to the observation clues;

estimating the three-dimensional pose of the target object according to the observation clues and the three-dimensional deformation model, and thereby obtaining the plane normal vector of the plane on the side of the target object that contacts the support plane;

calculating, according to the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, and judging, according to the parallelism, whether the image to be tested is a tampered image, wherein the parallelism is the angle between the normal vectors of different planes.
Further, in a preferred technical solution provided by the present invention, marking the observation clues of the image to be tested specifically includes:

marking feature observation points of the target object in the image to be tested, and marking the endpoints of straight line segments in two different directions parallel to the support plane in the image to be tested;

wherein the feature observation points include contour points of the target object, and the straight line segments in each direction include several parallel straight line segments.
Further, in a preferred technical solution provided by the present invention:

marking the contour points of the target object specifically includes: marking the contour points of the target object with an interactive mouse-dragging method;

marking the endpoints of the straight line segments specifically includes:

marking the center points at both ends of each straight line segment with an interactive mouse-clicking method;

setting the measurement uncertainty of each center point according to the blur of the edge points of the straight line segment, and marking the dispersion region of each center point according to the measurement uncertainties.
Further, in a preferred technical solution provided by the present invention, constructing the three-dimensional deformation model of the object category to which the target object belongs specifically includes:

acquiring a plurality of 3D sample models of samples classified under the object category to which the target object belongs, and establishing semantic correspondence among the vertices of each 3D sample model;

constructing the three-dimensional deformation model from all semantically corresponded 3D sample models by principal component analysis.
Further, in a preferred technical solution provided by the present invention:

acquiring the plurality of 3D sample models of samples classified under the object category to which the target object belongs specifically includes: obtaining 3D sample models preset in drawing software, and/or acquiring a 3D sample model of each sample with a 3D model scanning device.

Further, in a preferred technical solution provided by the present invention:

establishing semantic correspondence among the vertices of each 3D sample model specifically includes: putting the 3D sample models into semantic correspondence with a non-rigid registration method.
Further, in a preferred technical solution provided by the present invention, constructing the three-dimensional deformation model specifically includes:

constructing, from the three-dimensional coordinates of each semantically corresponded 3D sample model, a one-dimensional column vector corresponding to each 3D sample model, wherein the elements of the one-dimensional column vector are the three-dimensional coordinates of the vertices in the 3D sample model;

splicing the one-dimensional column vectors of all 3D sample models column by column to obtain a 3D sample model matrix;

analyzing the 3D sample model matrix with principal component analysis to obtain the three-dimensional deformation model of the object category to which the target object belongs.
Further, in a preferred technical solution provided by the present invention, estimating the three-dimensional normal vector of the support plane according to the observation clues specifically includes:

sampling the marked endpoints of the straight line segments in the two different directions parallel to the support plane in the image to be tested, and calculating the vanishing points of the image in the two directions from the sampled endpoints;

constructing the vanishing-line equation of the support plane from the three-dimensional homogeneous coordinates of the vanishing points, wherein the vanishing line of the support plane is the straight line through the two vanishing points;

calculating the three-dimensional normal vector of the support plane from the vanishing-line equation and the camera intrinsic parameters;

wherein the endpoints are sampled multiple times to obtain multiple groups of three-dimensional normal vectors.

Further, in a preferred technical solution provided by the present invention, sampling the endpoints of each straight line segment specifically includes:

taking the two-dimensional coordinates of the center point of each endpoint as the mean and the measurement uncertainty of each center point as the standard deviation, and sampling the endpoints of each straight line segment with a Gaussian sampling method.
Further, in a preferred technical solution provided by the present invention:

calculating the vanishing points of the image to be tested in each direction specifically includes: calculating the vanishing point in each direction by maximum likelihood estimation;

constructing the vanishing-line equation of the support plane specifically includes: constructing the vanishing-line equation with the two-point line equation formula.

Further, in a preferred technical solution provided by the present invention, the three-dimensional normal vector is calculated as:

n = K^T l

where n is the three-dimensional normal vector of the support plane in the camera coordinate system, K is the matrix of camera intrinsic parameters, T is the matrix transpose symbol, and l is the vanishing-line equation.
Further, in a preferred technical solution provided by the present invention, estimating the three-dimensional pose of the target object according to the observation clues and the three-dimensional deformation model specifically includes:

constructing, from the feature observation points of the target object in the image to be tested and the three-dimensional deformation model, an objective function for fitting the three-dimensional deformation model, and optimizing the objective function to obtain optimized three-dimensional pose parameters and three-dimensional shape parameters of the target object;

wherein the objective function is parameter-initialized multiple times to obtain multiple groups of optimized three-dimensional pose parameters and three-dimensional shape parameters.

Further, in a preferred technical solution provided by the present invention, the objective function takes the form

min over (θ_p, θ_s) of Σ_{n=1..N} d²(c_n, ĉ_{m(n)}(θ_p, θ_s; θ_c))

where N and n are the total number and index of the feature observation points of the target object in the image to be tested; c_n is the n-th feature observation point of the target object; ĉ_m is the m-th feature observation point of the two-dimensional projection of the three-dimensional deformation model, ĉ_{m(n)} being the one currently corresponding to c_n; d²(c_n, ĉ_m) is the squared Euclidean distance between the feature observation points c_n and ĉ_m; θ_p and θ_s are the three-dimensional pose parameter and three-dimensional shape parameter of the target object; and θ_c denotes the camera intrinsic parameters.

The constraint of the objective function is:

|(θ_s)_n| ≤ k σ_n

where (θ_s)_n is the n-th component of the three-dimensional shape parameter of the target object, σ_n is the standard deviation of the n-th principal component direction when the three-dimensional deformation model is constructed by principal component analysis, and k is a preset constant.
Further, in a preferred technical solution provided by the present invention, optimizing the objective function includes optimizing the objective function with the iterative closest point algorithm, specifically including:

finding, among the feature observation points of the two-dimensional projection of the three-dimensional deformation model, the closest point to each feature observation point of the target object in the image to be tested, and correcting the correspondence between the three-dimensional deformation model and its two-dimensional projection according to the correspondences between the feature observation points of the target object and their respective closest points;

performing parameter optimization on the corrected three-dimensional deformation model, and re-correcting the correspondence between the parameter-optimized three-dimensional deformation model and its two-dimensional projection, until the residual of the objective function satisfies the convergence condition or a preset number of iterations is reached; the parameters include the three-dimensional pose parameter and the three-dimensional shape parameter.

Further, in a preferred technical solution provided by the present invention, performing multiple parameter initializations on the objective function specifically includes:

randomly selecting multiple parameter values within a parameter dispersion region centered on a preset parameter value, and using the multiple parameter values as the initial parameter values for the respective optimization runs of the objective function.
Further, in a preferred technical solution provided by the present invention, the method includes calculating the parallelism between the target object and the support plane and/or between multiple target objects as follows:

D = Ang(p_0, q_0)

where P is the distribution set of the plane normal vectors of one target object; Q is the distribution set of the three-dimensional normal vectors of the support plane, or the distribution set of the plane normal vectors of another target object; D is the angle between the average directions of the distribution set P and the distribution set Q; p_0 is the weighted average of the distribution set P; q_0 is the weighted average of the distribution set Q; and Ang is the angle-calculation function.

The weighted average g_0 of any distribution set G is computed from the normal vectors in G, where G is a distribution set of plane normal vectors of a target object or the distribution set of three-dimensional normal vectors of the support plane; g_a is the a-th normal vector in G and A is the total number of normal vectors in G; each normal vector g_a is weighted according to its residual e_a.

The residual e_a of the a-th normal vector g_a is determined as follows: when G is a distribution set of plane normal vectors of a target object, e_a takes the value of the residual satisfying the convergence condition obtained by optimizing the objective function for constructing the three-dimensional deformation model; when G is the distribution set of three-dimensional normal vectors of the support plane, e_a takes a fixed constant value.
Further, in a preferred technical solution provided by the present invention, the method further includes calculating, from the parallelism probability density distributions of real target objects and tampered target objects in images to be tested, a parallelism threshold and a tampering probability for judging whether the image to be tested is a tampered image; specifically:

the tampering probability is calculated as

P(y=1 | D) = f(D | y=1) / (f(D | y=1) + f(D | y=0))

where y=1 indicates that the image to be tested is a tampered image and y=0 that it is a real image; D is the parallelism between the target object and the support plane in the image to be tested; P(y=1 | D) is the probability that the image to be tested is a tampered image when the parallelism of the target object is D; f(D | y=1) is the probability density of the parallelism D when the image to be tested is a tampered image, and f(D | y=0) the probability density when it is a real image; the prior probabilities of the image to be tested being a tampered image and a real image are taken as equal;

the parallelism threshold is the parallelism D_50% corresponding to a tampering probability of 50%.
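The reconstructed formula above follows from Bayes' rule together with the stated equal-prior assumption, in one step:

```latex
P(y{=}1 \mid D)
  = \frac{f(D \mid y{=}1)\,P(y{=}1)}
         {f(D \mid y{=}1)\,P(y{=}1) + f(D \mid y{=}0)\,P(y{=}0)}
  = \frac{f(D \mid y{=}1)}{f(D \mid y{=}1) + f(D \mid y{=}0)},
\qquad P(y{=}1) = P(y{=}0).
```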
In a second aspect, the technical solution of an image tampering forensics device in the present invention is:

The device includes:

an observation clue labeling module configured to mark observation clues of an image to be tested, wherein the image to be tested includes a target object and a support plane having a plane contact relationship;

a three-dimensional deformation model construction module configured to construct a three-dimensional deformation model of the object category to which the target object belongs;

a support plane normal vector estimation module configured to estimate the three-dimensional normal vector of the support plane according to the observation clues;

a target object normal vector estimation module configured to estimate the three-dimensional pose of the target object according to the observation clues and the three-dimensional deformation model, and thereby obtain the plane normal vector of the plane on the side of the target object that contacts the support plane;

a judgment module configured to calculate, according to the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, and to judge, according to the parallelism, whether the image to be tested is a tampered image, wherein the parallelism is the angle between the normal vectors of different planes.
Compared with the prior art, the above technical solutions have at least the following beneficial effects:

1. The image tampering forensics method provided by the present invention detects the parallelism between a target object and a support plane having a plane contact relationship in the image to be tested, and judges from the magnitude of the parallelism whether the image to be tested is a tampered image. The method does not depend on subtle image statistics in the image to be tested and can effectively determine whether a low-quality image is a tampered image.

2. In the image tampering forensics device provided by the present invention, the support plane normal vector estimation module can estimate the three-dimensional normal vector of the support plane, the target object normal vector estimation module can estimate the plane normal vector of the target object, and the judgment module can calculate, from the three-dimensional normal vector and the plane normal vector, the parallelism between the target object and the support plane and/or between multiple target objects, and effectively judge from the parallelism whether a low-quality image is a tampered image.
Brief Description of the Drawings

FIG. 1 is an implementation flowchart of an image tampering forensics method in an embodiment of the present invention;

FIG. 2 is a schematic diagram of the contact relationship between a target object and a support plane in an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of an image tampering forensics device in an embodiment of the present invention;

FIG. 4 is a schematic diagram of the three-dimensional deformation model of shoes in an embodiment of the present invention;

FIG. 5 is a schematic diagram of the three-dimensional poses obtained by fitting the three-dimensional deformation model to target object ID1;

FIG. 6 is a schematic diagram of the three-dimensional poses obtained by fitting the three-dimensional deformation model to target object ID2;

FIG. 7 is a schematic diagram of the normal vector distribution sets;

FIG. 8 is a schematic diagram of the 3D sample models after semantic correspondence;

FIG. 9 is a first schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;

FIG. 10 is a second schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;

FIG. 11 is a third schematic diagram of three-dimensional poses obtained by fitting the three-dimensional deformation model;

FIG. 12 shows the parallelism probability density distributions of real target objects and tampered target objects.

Reference numerals: 11: observation clue labeling module; 12: three-dimensional deformation model construction module; 13: support plane normal vector estimation module; 14: target object normal vector estimation module; 15: judgment module; 211: three-dimensional pose of the left shoe in shoes ID1; 221: three-dimensional pose of the right shoe in shoes ID1; 231: three-dimensional pose of the left shoe in shoes ID2; 232: preliminary three-dimensional pose of the left shoe in shoes ID2; 233: intermediate three-dimensional pose of the left shoe in shoes ID2; 234: final three-dimensional pose of the left shoe in shoes ID2; 241: three-dimensional pose of the right shoe in shoes ID2; 242: preliminary three-dimensional pose of the right shoe in shoes ID2; 243: intermediate three-dimensional pose of the right shoe in shoes ID2; 244: final three-dimensional pose of the right shoe in shoes ID2; 41: 3D sample model of a leather shoe; 42: 3D sample model of the leather shoe after semantic correspondence; 51: 3D sample model of a casual shoe; 52: 3D sample model of the casual shoe after semantic correspondence; 61: 3D sample model of a sports shoe; 62: 3D sample model of the sports shoe after semantic correspondence; 71: plane normal vector distribution of shoes ID1; 72: plane normal vector distribution of shoes ID2; 73: three-dimensional normal vector distribution of the ground.
Detailed Description of the Embodiments

Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only intended to explain the technical principles of the present invention and are not intended to limit its scope of protection.

A plane contact relationship means that a contact plane exists between an object and the part that supports the object. For example: for a person standing on the ground, there is a plane contact relationship between the ground and the person's soles; for a car on a road, between the road surface and the bottoms of the car's tires; for a bottle on a table, between the tabletop and the bottom of the bottle. Because a plane contact relationship exists between the target object and the support plane, the coordinate system of the target object should be parallel to the coordinate system of the support plane, and the coordinate systems of different target objects having the same plane contact relationship with the support plane should also be parallel to one another. FIG. 2 illustrates the contact relationship between a target object and a support plane: as shown, O1z1 is parallel to Opzp and parallel to O2z2. When an image is a tampered image, for example an image formed by splicing with PS software, the spliced object can hardly form a true plane contact relationship with the support plane in the three-dimensional scene; that is, image splicing is likely to break the plane contact constraint of the image to be tested. The image tampering forensics method proposed by the present invention judges whether an image to be tested is a tampered image by detecting the angle between the plane normal vectors of a target object and of its support plane in the image: if the angle is 0°, the plane normal vectors of the target object and its support plane are perfectly parallel and the image to be tested is a real image; the larger the angle, the more likely the image to be tested is a tampered image.

An image tampering forensics method provided by an embodiment of the present invention is described in detail below with reference to the accompanying drawings.

FIG. 1 shows the implementation flow of an image tampering forensics method; as shown, in this embodiment whether the image to be tested is a tampered image can be judged by the following steps:

Step S101: Mark the observation clues of the image to be tested.

In this embodiment, marking the observation clues covers two aspects: marking the feature observation points of the target object, and marking the endpoints of straight line segments in two different directions parallel to the support plane in the image to be tested.

The feature observation points of the target object can be marked with an interactive mouse-dragging method, marking the contour points to form the contour of the target object.

The endpoints of the straight line segments can be marked by the following steps:

1. Mark the center points at both ends of each straight line segment with an interactive mouse-clicking method. In this embodiment the segments in each direction should include several parallel straight line segments; for example, the segments in each direction may include two parallel segments, in which case the center points of the eight endpoints of four segments need to be marked. Straight line segments already present in the image and parallel to the support plane can also be selected for endpoint marking.

2. Because of the limited display quality of the image to be tested, even the straight line segments already present in the image have a certain degree of blur. Therefore the measurement uncertainty of each center point is also set according to the blur of the segment's edge points, and the dispersion region of each center point is marked according to the measurement uncertainties. In summary, a segment endpoint marked in this embodiment comprises a center point and its dispersion region; marking the dispersion region compensates for the uncertainty in the center point's position.
Step S102: construct a 3D morphable model of the object category to which the target object belongs.
Here, the object category to which the target object belongs is a superordinate concept of the target object, referring to the type of object that the target object belongs to. For example, if the target object is a sports shoe, its object category can be determined to be shoes, which may specifically include sports shoes, leather shoes, casual shoes, and other types of shoes; the 3D morphable model of the object category then refers to a 3D morphable model of shoes.
In this embodiment, the 3D morphable model of the object category can be constructed as follows:
1. Obtain 3D sample models of multiple samples classified under the object category of the target object, these samples being of the same type as the target object. In this embodiment, the 3D sample models may be obtained by downloading models already stored in drawing software such as CAD software, or by directly 3D-scanning physical samples with a 3D model scanning device such as Kinect.
2. Establish semantic correspondence among the vertices of all 3D sample models obtained in step 1. This step belongs to the field of model registration for 3D models, so in this embodiment non-rigid registration methods such as non-rigid ICP may be used to bring the vertices of the 3D sample models into semantic correspondence.
3. Apply principal component analysis (PCA) to all semantically corresponded 3D sample models to construct the 3D morphable model of the object category. Specifically:
(1) Represent each 3D sample model as a one-dimensional column vector. First obtain the 3D coordinates of each semantically corresponded 3D sample model, and take the 3D coordinates of all vertices of each 3D sample model as the elements of a one-dimensional column vector. In this embodiment, the one-dimensional column vector of a 3D sample model may be written as formula (1):

$$S_i = \left[x_1^i, y_1^i, z_1^i, \ldots, x_{N_v}^i, y_{N_v}^i, z_{N_v}^i\right]^T \qquad (1)$$

The parameters in formula (1) are as follows: $1 \le i \le N_s$, where $i$ and $N_s$ are the index and total number of semantically corresponded 3D sample models; $(x_j^i, y_j^i, z_j^i)$ are the 3D coordinates of the $j$-th vertex of the $i$-th 3D sample model, with $1 \le j \le N_v$, where $j$ and $N_v$ are the index and total number of vertices in a 3D sample model.
(2) Concatenate the one-dimensional column vectors of the $N_s$ 3D sample models column by column to form a 3D sample model matrix.
(3) Apply PCA to the 3D sample model matrix obtained in step (2) to obtain the 3D morphable model $(S_0, \Phi)$ of the object category, where $S_0$ is the mean shape and $\Phi$ is the matrix of principal variation directions. Each column of $\Phi$ represents a significant direction of shape variation, and each column has the same dimensionality as the one-dimensional column vector $S_i$ of a 3D sample model. In this embodiment, based on the 3D morphable model $(S_0, \Phi)$, a new shape of the object category can be expressed as the linear equation shown in formula (2):

$$S(\theta_s) = S_0 + \Phi\,\theta_s \qquad (2)$$

where $\theta_s$ is the 3D shape parameter. A sketch of this construction is given below.
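As a concrete illustration of steps (1) to (3), the following Python sketch assembles the sample-model matrix of formula (1) and extracts the mean shape $S_0$ and principal variation directions $\Phi$ of formula (2) by PCA. It is a minimal sketch under stated assumptions: the array layout (Ns, Nv, 3), the SVD-based PCA, and the number of retained components are illustrative choices, not prescribed by the present invention.

```python
import numpy as np

def build_morphable_model(samples, n_components=5):
    """PCA morphable model from semantically corresponded 3D sample models.

    samples: array of shape (Ns, Nv, 3); vertex j of every sample must carry
    the same semantic meaning (non-rigid registration already applied).
    Returns (S0, Phi, sigma): mean shape, principal directions, stddevs.
    """
    Ns = samples.shape[0]
    # Formula (1): flatten each sample into a 3*Nv column vector, then
    # concatenate the Ns columns into the 3D sample model matrix.
    S = samples.reshape(Ns, -1).T                  # shape (3*Nv, Ns)
    S0 = S.mean(axis=1, keepdims=True)             # mean shape
    U, sv, _ = np.linalg.svd(S - S0, full_matrices=False)
    Phi = U[:, :n_components]                      # principal variation directions
    sigma = sv[:n_components] / np.sqrt(max(Ns - 1, 1))  # per-direction stddev
    return S0.ravel(), Phi, sigma

def new_shape(S0, Phi, theta_s):
    """Formula (2): S(theta_s) = S0 + Phi @ theta_s."""
    return S0 + Phi @ theta_s
```

Each retained column of `Phi` plays the role of one significant shape-variation direction, and `sigma` supplies the per-direction standard deviations that later appear in constraint (6).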
Step S103: estimate the 3D normal vector of the supporting plane from the observation cues. The specific steps are:
1. First, draw one sample of the endpoints of each line segment annotated in the two different directions parallel to the supporting plane in the image under test, obtaining endpoint coordinates for each line segment. In this embodiment, Gaussian sampling may be used: the 2D coordinates of each endpoint's center point are set as the mean of the Gaussian distribution, and the measurement uncertainty of each center point is set as its standard deviation.
2. Since parallel lines that are not parallel to the imaging plane intersect at a vanishing point, the vanishing points of the image under test in the two different directions can be computed from the endpoints; connecting the two vanishing points yields the vanishing line of the supporting plane.
In this embodiment, maximum likelihood estimation may be used to compute the vanishing point in each direction so that the likelihood of the observed line segment endpoints is maximized; after the coordinates of the two vanishing points are obtained, the vanishing line equation can be constructed using the two-point form of the line equation.
3. Compute the 3D normal vector of the supporting plane according to formula (3):

$$n = K^T l \qquad (3)$$

The parameters in formula (3) are as follows: $n$ is the 3D normal vector of the supporting plane in the camera coordinate system, $K$ is the camera intrinsic matrix, $T$ is the matrix transpose symbol, and $l$ is the vanishing line equation. In this embodiment, the camera intrinsics can be obtained by conventional means: either the camera intrinsic matrix is assumed known, with the optical center located at the center of the image under test and the focal length read from the image header file such as EXIF, or the intrinsics can be computed from three sets of mutually perpendicular parallel lines in the image under test.
As noted in step S101, each line segment endpoint carries a dispersion region, so in this embodiment the endpoints can be sampled multiple times, for example with the number of samples set to 500, repeating steps 1-3 to obtain multiple sets of 3D normal vectors of the supporting plane. A sketch of this procedure follows.
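To make steps 1 to 3 concrete, the sketch below resamples the annotated endpoints under Gaussian noise, intersects each direction's segments in a least-squares sense as a stand-in for the maximum likelihood vanishing point estimator, and applies formula (3). The helper names, the single shared noise level, and the externally supplied intrinsic matrix `K` are assumptions for illustration, not the prescribed implementation.

```python
import numpy as np

def vanishing_point(segments):
    """Least-squares intersection of lines given as ((x1,y1),(x2,y2)) pairs.

    A stand-in for the maximum likelihood estimator of the text: each segment
    yields a homogeneous line, and the vanishing point is the homogeneous
    point closest to all lines (smallest right singular vector).
    """
    lines = []
    for p1, p2 in segments:
        a = np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])
        lines.append(a / np.linalg.norm(a[:2]))
    _, _, Vt = np.linalg.svd(np.array(lines))
    return Vt[-1]

def plane_normals(segs_dir1, segs_dir2, K, n_samples=500, noise=1.0, rng=None):
    """Formula (3): n = K^T l, with Gaussian resampling of the endpoints.

    noise is a single shared endpoint standard deviation here; the text sets
    it per center point from the annotated measurement uncertainty.
    """
    rng = rng or np.random.default_rng(0)
    jitter = lambda segs: [(np.asarray(p) + rng.normal(0, noise, 2),
                            np.asarray(q) + rng.normal(0, noise, 2))
                           for p, q in segs]
    normals = []
    for _ in range(n_samples):
        v1 = vanishing_point(jitter(segs_dir1))
        v2 = vanishing_point(jitter(segs_dir2))
        l = np.cross(v1, v2)          # vanishing line through both points
        n = K.T @ l                   # plane normal in camera coordinates
        normals.append(n / np.linalg.norm(n))
    return np.array(normals)          # distribution set of 3D normal vectors
```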
Step S104: estimate the 3D pose of the target object from the observation cues and the 3D morphable model, and then obtain the plane normal vector of the plane on the side of the target object in contact with the supporting plane. In this embodiment, the target object contour obtained by annotating the feature observation points in step S101 is taken as the fitting target, and the 3D morphable model constructed in step S102 is fitted to it, yielding the 3D pose parameters and 3D shape parameters of the target object's 3D pose; finally, the plane normal vector is determined from the 3D pose parameters of the target object. Specifically:
1. Construct the objective function for fitting the 3D morphable model from the feature observation points of the target object in the image under test and the 3D morphable model, computed as formula (4):
$$E(\theta_p, \theta_s; \theta_c) = \sum_{n=1}^{N} \min_m \left\| c_n - \hat{c}_m(\theta_p, \theta_s; \theta_c) \right\|^2 \qquad (4)$$

The parameters in formula (4) are as follows: $N$ and $n$ are the total number and index of the feature observation points of the target object in the image under test; $c_n$ is the $n$-th feature observation point of the target object, and $\hat{c}_m$ is the $m$-th feature observation point of the 2D projection of the 3D morphable model; $\|c_n - \hat{c}_m\|^2$ is the squared Euclidean distance between feature observation points $c_n$ and $\hat{c}_m$; $\theta_p$ and $\theta_s$ are the 3D pose parameters and 3D shape parameters of the target object; $\theta_c$ denotes the camera intrinsics. In this embodiment, the optimization goal of the objective function is to minimize the Euclidean distance between the contour of the target object in the image under test and the contour of the 2D projection of the 3D morphable model by optimizing the target object's 3D pose and shape parameters. Here the feature observation points $c_n$ are the contour points of the target object in the image under test, and the feature observation points $\hat{c}_m$ are the contour points of the 2D projection of the 3D morphable model.
The feature observation points $\hat{c}_m$ can be obtained according to formula (5):

$$\{\hat{c}_m\} = \mathcal{C}\left(\mathrm{Proj}\left(S(\theta_s);\, \theta_p, \theta_c\right)\right) \qquad (5)$$

and the constraint of the objective function can be written as formula (6):

$$\left|(\theta_s)_n\right| \le k\,\sigma_n \qquad (6)$$

The parameters in formulas (5) and (6) are as follows: $\mathcal{C}(\cdot)$ denotes the operation of extracting the contour from the 2D projection of the 3D morphable model; $(\theta_s)_n$ is the $n$-th component of the 3D shape parameters of the target object; $\sigma_n$ is the standard deviation of the $n$-th principal component direction when the 3D morphable model was constructed by principal component analysis; and $k$ is a preset constant.
2. Optimize the objective function to obtain the optimized 3D pose parameters and 3D shape parameters of the target object. In this embodiment, an iterative closest point (ICP) algorithm may be used to optimize the objective function, specifically:
(1) Perform one parameter initialization of the objective function, where the parameters include the 3D pose parameters and the 3D shape parameters.
(2) Among the feature observation points of the 2D projection of the 3D morphable model, find the closest point to each feature observation point of the target object in the image under test, and correct the correspondence between the 3D morphable model and its 2D projection according to the correspondences between the target object's feature observation points and their closest points.
(3) Perform parameter optimization on the corrected 3D morphable model, and check whether the residual of the objective function satisfies the convergence condition or the preset number of iterations has been reached; if the convergence condition is not satisfied and/or the preset number of iterations has not been reached, return to step (2) and re-correct the correspondence between the parameter-optimized 3D morphable model and its 2D projection. In this embodiment, least squares may be used for the parameter optimization.
Because the objective function of formula (4) is severely non-convex, its optimization result depends on the choice of initial parameter values. Therefore, to reduce the uncertainty of the optimization result caused by the initial values, this embodiment performs multiple optimization runs, initializing the parameters of the objective function in each run; for example, with the number of parameter initializations set to 20, multiple sets of optimized 3D pose and shape parameters are obtained, and in turn multiple sets of plane normal vectors of the target object. The parameters of the objective function may be initialized as follows: first, given a preset parameter value, set a parameter dispersion region centered on that value; then randomly select one parameter within that region as the initial parameter value for the initialization.
Further, this embodiment also provides a preferred solution for optimizing the objective function: the objective functions of multiple target objects that share one set of 3D shape parameters can be optimized jointly, reducing the parameter degrees of freedom and improving the fitting accuracy of the target objects. For example, for a person standing on the ground, the person's two shoes each have a planar contact relationship with the ground, satisfy a coplanarity constraint, and share one set of 3D shape parameters; the objective functions of the two shoes can therefore be optimized jointly. A sketch of the multi-start fitting loop follows.
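The multi-start optimization of step 2 can be sketched as follows. The callable `project_contour` is a hypothetical hook standing in for the projection-and-contour-extraction operation of formula (5), and SciPy's derivative-free Powell method stands in for the ICP plus least-squares procedure described above; both substitutions are illustrative, not the prescribed implementation.

```python
import numpy as np
from scipy.optimize import minimize

def residual(theta, contour_pts, project_contour, n_pose):
    """Formula (4): sum over annotated contour points of the squared
    distance to the nearest point on the projected model contour."""
    theta_p, theta_s = theta[:n_pose], theta[n_pose:]
    proj = project_contour(theta_p, theta_s)               # (M, 2) contour
    d2 = ((contour_pts[:, None, :] - proj[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def fit_pose(contour_pts, project_contour, theta_p0, theta_s0, sigma,
             k=3.0, n_starts=20, spread=0.1, rng=None):
    """Multi-start optimization; keeps the run with the lowest residual.

    theta_s0 and sigma must have equal length; sigma holds the per-component
    PCA standard deviations used in constraint (6).
    """
    rng = rng or np.random.default_rng(0)
    n_pose = theta_p0.size
    x0 = np.concatenate([theta_p0, theta_s0])
    # Formula (6): |(theta_s)_n| <= k * sigma_n as box bounds on shape only.
    bounds = [(None, None)] * n_pose + [(-k * s, k * s) for s in sigma]
    best = None
    for _ in range(n_starts):
        # Random start inside a dispersion region centered on the preset value.
        start = x0 + rng.uniform(-spread, spread, x0.size)
        res = minimize(residual, start, method="Powell", bounds=bounds,
                       args=(contour_pts, project_contour, n_pose))
        if best is None or res.fun < best.fun:
            best = res
    return best.x[:n_pose], best.x[n_pose:], best.fun
```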
Step S105: compute the parallelism between the target object and the supporting plane and/or among multiple target objects, and judge from the parallelism whether the image under test is a tampered image. In this embodiment, the parallelism between different planes is used to evaluate the pose consistency of the image under test, and the parallelism between different planes can be expressed as the angle between the normal vectors of the different planes.
As described in step S103, multiple sets of 3D normal vectors were obtained when estimating the 3D normal vector of the supporting plane, and as described in step S104, multiple sets of plane normal vectors were obtained when estimating the plane normal vector of the plane on the side of the target object in contact with the supporting plane. In this embodiment, each 3D normal vector and plane normal vector is regarded as a point on a plane, so a set $\mathcal{P}$ can denote the distribution set of plane normal vectors of one target object, and a set $\mathcal{Q}$ can denote the distribution set of 3D normal vectors of the supporting plane or the distribution set of plane normal vectors of another target object. Specifically, the parallelism between the target object and the supporting plane and/or among multiple target objects can be computed according to formula (7):

$$D(\mathcal{P}, \mathcal{Q}) = \mathrm{Ang}(p_0, q_0) \qquad (7)$$

The parameters in formula (7) are as follows: $D(\mathcal{P}, \mathcal{Q})$ is the angle between the mean directions of distribution set $\mathcal{P}$ and distribution set $\mathcal{Q}$; $p_0$ is the weighted mean of distribution set $\mathcal{P}$, and $q_0$ is the weighted mean of distribution set $\mathcal{Q}$; Ang is the angle-computing function.
The weighted means of distribution set $\mathcal{P}$ and distribution set $\mathcal{Q}$ are computed in the same way, so this embodiment considers an arbitrary distribution set $\mathcal{G}$, which may be either the distribution set of plane normal vectors of a target object or the distribution set of 3D normal vectors of the supporting plane, and uses it to describe the computation of the weighted mean. Specifically, the weighted mean $g_0$ of the distribution set $\mathcal{G}$ can be computed according to formula (8):

$$g_0 = \frac{\sum_{a=1}^{A} g_a / e_a}{\sum_{a=1}^{A} 1 / e_a} \qquad (8)$$

The parameters in formula (8) are as follows: $g_a$ is the $a$-th normal vector in distribution set $\mathcal{G}$, and $A$ is the total number of normal vectors in $\mathcal{G}$; $e_a$ is the residual of the $a$-th normal vector $g_a$. When $\mathcal{G}$ is the distribution set of plane normal vectors of a target object, the residual $e_a$ takes the value of the residual, satisfying the convergence condition, obtained by optimizing the objective function that fits the 3D morphable model; when $\mathcal{G}$ is the distribution set of 3D normal vectors of the supporting plane, the residual $e_a$ takes a fixed constant value, for example 1.
In this embodiment, the parallelism between the target object and the supporting plane and/or among multiple target objects is computed, and the magnitude of the parallelism is used to judge whether the image under test is a tampered image: the larger the parallelism, the more likely the image under test is a tampered image. A sketch of this parallelism computation follows.
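A minimal sketch of formulas (7) and (8), assuming unit normal vectors and the inverse-residual weighting described above:

```python
import numpy as np

def weighted_mean_direction(G, e):
    """Formula (8): residual-weighted mean g0 of a set of normal vectors.

    G: (A, 3) normal vectors; e: (A,) residuals (fitting residuals for a
    target object, a fixed constant such as 1.0 for the supporting plane).
    """
    w = 1.0 / np.asarray(e, dtype=float)
    g0 = (w[:, None] * np.asarray(G)).sum(axis=0) / w.sum()
    return g0 / np.linalg.norm(g0)

def parallelism(P, eP, Q, eQ):
    """Formula (7): angle in degrees between the mean directions of two sets."""
    p0 = weighted_mean_direction(P, eP)
    q0 = weighted_mean_direction(Q, eQ)
    # abs() makes the angle sign-agnostic, an illustrative choice for
    # normals whose orientation is ambiguous.
    cos = np.clip(abs(p0 @ q0), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

With the distribution sets from steps S103 and S104 as inputs, the returned angle is the parallelism D used in the judgment above.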
Preferably, the present invention further provides a technical solution that, given the parallelism of authentic and tampered target objects in images under test, computes a parallelism threshold and a tampering probability for judging whether the image under test is a tampered image. Specifically, the tampering probability of the image under test can be computed as follows:
1. Obtain the parallelism probability density distributions of authentic target objects and tampered target objects in the image under test.
2. Compute the tampering probability of the image under test according to formula (9):

$$P(y=1 \mid D) = \frac{f(D \mid y=1)\,P(y=1)}{f(D \mid y=1)\,P(y=1) + f(D \mid y=0)\,P(y=0)} \qquad (9)$$

The parameters in formula (9) are as follows: $y=1$ indicates that the image under test is a tampered image and $y=0$ that it is an authentic image; $D$ is the parallelism between the target object and the supporting plane in the image under test; $P(y=1\mid D)$ is the probability that the image under test is tampered given that the parallelism of the target object is $D$; $f(D\mid y=1)$ is the probability density of parallelism $D$ when the image under test is tampered, and $f(D\mid y=0)$ is the probability density of parallelism $D$ when the image under test is authentic.
In this embodiment, the prior probabilities of the image under test being tampered and authentic are taken to be equal, i.e. $P(y=1) = P(y=0)$, so formula (9) simplifies to:

$$P(y=1 \mid D) = \frac{f(D \mid y=1)}{f(D \mid y=1) + f(D \mid y=0)} \qquad (10)$$
In this embodiment, the parallelism between each of multiple target objects and the supporting plane can be compared, and combined with the tampering probability of each target object computed by formula (10), to comprehensively judge whether the image under test is a tampered image. For example, consider two people standing on the ground, one wearing shoes A and the other wearing shoes B, with the shoes as target objects and the ground as the supporting plane. The parallelism between shoes A and the ground is 0.59° with a tampering probability of 15.6%; the parallelism between shoes B and the ground is 16.56° with a tampering probability of 96.9%; and the parallelism between the two pairs of shoes is 16.63° with a tampering probability of 96.9%. Taken together, these figures show that shoes B are tampered content while shoes A are authentic, so the image under test "the person wearing shoes B" is a tampered image.
3. Compute, by formula (10), the parallelism D_50% at which P(y=1|D) = 50%, and take D_50% as the parallelism threshold. When the parallelism between the target object and the supporting plane in the image under test is greater than the parallelism threshold, the image under test is deemed a tampered image; when it is not greater than the threshold, the image under test is deemed authentic. For the above image under test "people standing on the ground", formula (10) gives D_50% = 4.61°; since 0.59° < 4.61°, shoes A are authentic, and since 16.56° > 4.61°, shoes B are tampered, so the image under test "people standing on the ground" is a tampered image. A sketch of this threshold computation follows.
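The tampering probability of formula (10) and the D_50% threshold can be sketched as below; the Gaussian class-conditional densities are purely illustrative stand-ins for the empirical distributions of FIG. 12, and the bisection assumes the probability crosses 50% exactly once.

```python
import numpy as np

def tamper_probability(D, f_tampered, f_authentic):
    """Formula (10): with equal priors, P(y=1|D) reduces to a density ratio."""
    f1, f0 = f_tampered(D), f_authentic(D)
    return f1 / (f1 + f0)

def parallelism_threshold(f_tampered, f_authentic, lo=0.0, hi=90.0):
    """D_50%: bisection for the parallelism where P(y=1|D) crosses 50%."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tamper_probability(mid, f_tampered, f_authentic) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative Gaussian densities only, not the measured curves of FIG. 12:
f1 = lambda d: np.exp(-0.5 * ((d - 15.0) / 6.0) ** 2) / (6.0 * np.sqrt(2 * np.pi))
f0 = lambda d: np.exp(-0.5 * (d / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
print(parallelism_threshold(f1, f0))   # threshold in degrees
```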
The image tampering forensics method provided by this embodiment, based on computer vision and image recognition technology, judges whether the image under test is a tampered image by detecting the parallelism between a target object and its supporting plane in a planar contact relationship in the image under test. Because the method does not rely on subtle image statistical features of the image under test, it can perform effective tampering forensics on low-quality images.
Although the steps in the above embodiment are described in the above order, those skilled in the art will understand that, to achieve the effects of this embodiment, the steps need not be executed in that order; they may be executed simultaneously (in parallel) or in reverse order, and such simple variations fall within the scope of protection of the present invention.
The image tampering forensics method provided by the present invention is validated below on an image under test containing the ground, walls, a ceiling, two people standing on the ground, and a bucket placed on the ground. Specifically:
1. The image under test contains several pairs of target objects and supporting planes in planar contact relationships. In this example, the shoes of the people standing on the ground are selected as the target objects and the ground as the supporting plane; the shoes of the person standing on the left in the image under test are labeled ID1 and those of the person standing on the right are labeled ID2. The bucket and the person on the left are authentic, while the person on the right was spliced into the image under test.
2. Annotate the contour points of shoes ID1 and ID2 by interactive mouse dragging.
3. Since the ceiling of the image under test contains multiple line segments parallel to the ground, two mutually perpendicular groups of line segments are selected, each group containing two parallel line segments, and the endpoints of the line segments are annotated by interactive mouse clicking.
4. Obtain 3D sample models of shoe samples available in CAD software by downloading them from the network. In this embodiment, the 3D sample models of the shoe category mainly include 3D sample models of sports shoes, leather shoes, casual shoes, and other types.
5. Register all 3D sample models so that their vertices are in semantic correspondence. FIG. 8 exemplarily shows the 3D sample models after semantic correspondence: the first row shows, before registration, the 3D sample model 41 of a leather shoe, the 3D sample model 51 of a casual shoe, and the 3D sample model 61 of a sports shoe; the second row shows, after registration, the 3D sample model 42 of the leather shoe, the 3D sample model 52 of the casual shoe, and the 3D sample model 62 of the sports shoe.
6. Apply principal component analysis to all semantically corresponded 3D sample models to obtain the 3D morphable model of the shoe category. FIG. 4 exemplarily shows the 3D morphable model of shoes.
7. From the contour points of shoes ID1 and ID2 annotated in step 2 and the shoe-category 3D morphable model obtained in step 6, estimate the distribution set of plane normal vectors of the sole plane of shoes ID1 and that of shoes ID2. FIG. 5 exemplarily shows the 3D pose obtained by fitting the 3D morphable model to target object ID1: the left image is the 3D pose 211 of the left shoe of shoes ID1 and the right image is the 3D pose 221 of the right shoe. FIG. 6 exemplarily shows the 3D pose obtained by fitting the 3D morphable model to target object ID2: the left image is the 3D pose 231 of the left shoe of shoes ID2 and the right image is the 3D pose 241 of the right shoe. FIGS. 9-11 exemplarily show the 3D poses obtained during fitting of the 3D morphable model: in FIG. 9 the left image is the preliminary 3D pose 232 of the left shoe of shoes ID2 and the right image is the preliminary 3D pose 242 of the right shoe; in FIG. 10 the left image is the intermediate 3D pose 233 of the left shoe and the right image is the intermediate 3D pose 243 of the right shoe; in FIG. 11 the left image is the final 3D pose 234 of the left shoe and the right image is the final 3D pose 244 of the right shoe.
8. From the line segment endpoints annotated in step 3, estimate the distribution set of 3D normal vectors of the ground.
FIG. 7 exemplarily shows the normal vector distribution sets: the horizontal and vertical axes represent the azimuth and zenith angles of the normal vectors, respectively. The plane normal vector distribution 71 of shoes ID1, the plane normal vector distribution 72 of shoes ID2, and the 3D normal vector distribution 73 of the ground are all presented as point sets in 2D coordinates, each point in a set representing one normal vector.
9. From the distribution sets obtained in steps 7 and 8, compute the angles between the mean directions of the distribution sets according to formulas (7) and (8), yielding the parallelism between shoes ID1 and shoes ID2, between shoes ID1 and the ground, and between shoes ID2 and the ground. Experimental computation gives a parallelism of 0.59° between shoes ID1 and the ground, 16.56° between shoes ID2 and the ground, and 16.63° between the two pairs of shoes. Since, as stated above, a larger parallelism means a higher likelihood that the image under test or target object is tampered, these figures show that shoes ID2 are tampered content and shoes ID1 are authentic. This judgment matches the known ground truth given in step 1, demonstrating that the image tampering forensics method can effectively detect whether the image under test is a tampered image.
10. From the experimental data, obtain the parallelism probability density distributions of shoes ID1 and shoes ID2, and compute the tampering probability of the image under test by formulas (9) and (10), thereby obtaining the parallelism threshold. FIG. 12 exemplarily shows the parallelism probability density distributions of authentic and tampered target objects; the intersection of the two curves is the parallelism threshold, i.e. the point where the tampering probability is 50%. Formula (10) gives the parallelism threshold D_50% = 4.61°; computation shows that the parallelism between the bucket and the ground in the image under test is less than 4.61°, so the bucket is judged authentic.
In this embodiment, parallelism detection and image judgment were performed on the known authentic target objects and spliced target objects in the image under test, and the judgments matched the known ground truth, demonstrating that the image tampering forensics method provided by the present invention can effectively detect whether an image under test is a tampered image.
Based on the same technical concept as the method embodiment, an embodiment of the present invention further provides an image tampering forensics apparatus, described in detail below with reference to the accompanying drawings.
FIG. 3 exemplarily shows the structure of an image tampering forensics apparatus in an embodiment of the present invention. As shown, the apparatus in this embodiment may include an observation cue annotation module 11, a 3D morphable model construction module 12, a supporting plane normal vector estimation module 13, a target object normal vector estimation module 14, and a judgment module 15. The observation cue annotation module 11 is configured to annotate observation cues of the image under test; the 3D morphable model construction module 12 is configured to construct the 3D morphable model of the object category to which the target object belongs; the supporting plane normal vector estimation module 13 is configured to estimate the 3D normal vector of the supporting plane from the observation cues; the target object normal vector estimation module 14 is configured to estimate the 3D pose of the target object from the observation cues and the 3D morphable model, thereby obtaining the plane normal vector of the plane on the side of the target object in contact with the supporting plane; and the judgment module 15 is configured to compute, from the 3D normal vector and the plane normal vector, the parallelism between the target object and the supporting plane and/or among multiple target objects, and to judge from the parallelism whether the image under test is a tampered image.
Further, the observation cue annotation module 11 in this embodiment may further include a first annotation unit for annotating the feature observation points of the target object in the image under test, and a second annotation unit for annotating the endpoints of line segments in two different directions parallel to the supporting plane in the image under test.
Further, the 3D morphable model construction module 12 in this embodiment may further include a model registration unit and a model construction unit. The model registration unit is configured to obtain 3D sample models of multiple samples classified under the object category of the target object and to bring the vertices of the 3D sample models into semantic correspondence; the model construction unit is configured to construct the 3D morphable model from all semantically corresponded 3D sample models using principal component analysis.
Further, the supporting plane normal vector estimation module 13 in this embodiment may further include a vanishing point computation unit, a vanishing line computation unit, and a 3D normal vector computation unit. The vanishing point computation unit is configured to sample the endpoints of the line segments annotated in the two different directions parallel to the supporting plane in the image under test and to compute, from the sampled endpoints, the vanishing point of the image under test in each direction; the vanishing line computation unit is configured to construct the vanishing line equation of the supporting plane from the 3D homogeneous coordinates of the vanishing points; and the 3D normal vector computation unit is configured to compute the 3D normal vector of the supporting plane from the vanishing line equation and the camera intrinsics.
Further, the target object normal vector estimation module 14 in this embodiment may further include an objective function construction unit, an objective function optimization unit, and a plane normal vector computation unit. The objective function construction unit is configured to construct, from the feature observation points of the target object in the image under test and the 3D morphable model, the objective function for fitting the 3D morphable model; the objective function optimization unit is configured to optimize the objective function to obtain the optimized 3D pose parameters and 3D shape parameters of the target object; and the plane normal vector computation unit is configured to compute, from the 3D pose parameters, the plane normal vector of the plane on the side of the target object in contact with the supporting plane.
Further, the judgment module 15 in this embodiment may further include a parallelism computation unit whose parallelism computation model is given by formulas (7) and (8).
Preferably, this embodiment also provides a preferred implementation of the image tampering forensics apparatus in which the apparatus further includes a parallelism threshold computation unit and a tampering probability computation unit. The computation model of the tampering probability computation unit is given by formulas (9) and (10); the parallelism threshold computation unit is configured to compute the parallelism D_50% corresponding to a tampering probability of 50% and to take D_50% as the parallelism threshold.
The above apparatus embodiment may be used to carry out the above method embodiment; its technical principles, the technical problems it solves, and the technical effects it produces are similar. Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes and related descriptions of the image tampering forensics described above may refer to the corresponding processes in the foregoing method embodiment and are not repeated here.
Those skilled in the art will understand that the above image tampering forensics apparatus also includes other well-known structures such as processors, controllers, and memories, where memories include but are not limited to random access memory, flash memory, read-only memory, programmable read-only memory, volatile memory, non-volatile memory, serial memory, parallel memory, registers, and the like, and processors include but are not limited to CPLD/FPGA, DSP, ARM processors, MIPS processors, and the like. To avoid unnecessarily obscuring the embodiments of the present disclosure, these well-known structures are not shown in FIG. 3.
Those skilled in the art will understand that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of an embodiment may be combined into one module or unit or component, and may furthermore be divided into multiple sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments and not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the server and client according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a PC program and a PC program product) for carrying out part or all of the methods described herein. Such a program implementing the present invention may be stored on a PC-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed PC. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and the like does not indicate any order; these words may be construed as names.
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily understand that the scope of protection of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of the relevant technical features may be made by those skilled in the art without departing from the principles of the present invention, and the technical solutions after such changes or substitutions will fall within the scope of protection of the present invention.

Claims (18)

  1. An image tampering forensics method, characterized in that the method comprises:
    annotating observation cues of an image under test, wherein the image under test comprises a target object and a supporting plane in a planar contact relationship;
    constructing a 3D morphable model of the object category to which the target object belongs;
    estimating a 3D normal vector of the supporting plane based on the observation cues;
    estimating a 3D pose of the target object based on the observation cues and the 3D morphable model, thereby obtaining a plane normal vector of the plane on the side of the target object in contact with the supporting plane;
    computing, based on the 3D normal vector and the plane normal vector, a parallelism between the target object and the supporting plane and/or among multiple target objects, and judging, based on the parallelism, whether the image under test is a tampered image, wherein the parallelism is the angle between the normal vectors of different planes.
  2. The image tampering forensics method according to claim 1, characterized in that the annotating observation cues of the image under test specifically comprises:
    annotating feature observation points of the target object in the image under test, and annotating endpoints of line segments in two different directions parallel to the supporting plane in the image under test;
    wherein the feature observation points comprise contour points of the target object, and the line segments in each direction comprise multiple parallel line segments.
  3. The image tampering forensics method according to claim 2, characterized in that
    the annotating contour points of the target object specifically comprises: annotating the contour points of the target object by interactive mouse dragging;
    the annotating endpoints of the line segments specifically comprises:
    annotating the center points of the two ends of each line segment by interactive mouse clicking;
    setting the measurement uncertainty of each center point according to the blur level of the edge points of each line segment, and annotating the dispersion region of each center point according to each measurement uncertainty.
  4. The image tampering forensics method according to claim 1, characterized in that the constructing a 3D morphable model of the object category to which the target object belongs specifically comprises:
    obtaining 3D sample models of multiple samples classified under the object category of the target object, and establishing semantic correspondence among the vertices of the 3D sample models;
    constructing the 3D morphable model from all semantically corresponded 3D sample models using principal component analysis.
  5. The image tampering forensics method according to claim 4, characterized in that the obtaining 3D sample models of multiple samples classified under the object category of the target object specifically comprises: obtaining 3D sample models preset in drawing software, and/or obtaining the 3D sample model of each sample with a 3D model scanning device.
  6. The image tampering forensics method according to claim 4, characterized in that the establishing semantic correspondence among the vertices of the 3D sample models specifically comprises: establishing semantic correspondence among the 3D sample models using a non-rigid registration method.
  7. The image tampering forensics method according to claim 4, characterized in that the constructing the 3D morphable model specifically comprises:
    constructing, from the 3D coordinates of each semantically corresponded 3D sample model, a one-dimensional column vector corresponding to each 3D sample model, wherein the elements of the one-dimensional column vector are the 3D coordinates of the vertices of the 3D sample model;
    concatenating the one-dimensional column vectors of all 3D sample models column by column to obtain a 3D sample model matrix;
    analyzing the 3D sample model matrix by principal component analysis to obtain the 3D morphable model of the object category to which the target object belongs.
  8. The image tampering forensics method according to claim 1, characterized in that the estimating a 3D normal vector of the supporting plane based on the observation cues specifically comprises:
    sampling the endpoints of the line segments annotated in two different directions parallel to the supporting plane in the image under test, and computing, from the sampled endpoints, the vanishing points of the image under test in the two different directions;
    constructing a vanishing line equation of the supporting plane from the 3D homogeneous coordinates of the vanishing points, wherein the vanishing line of the supporting plane is the line on which the connection of the vanishing points lies;
    computing the 3D normal vector of the supporting plane from the vanishing line equation and the camera intrinsics;
    wherein the endpoints are sampled multiple times, thereby obtaining multiple sets of 3D normal vectors.
  9. The image tampering forensics method according to claim 8, characterized in that the sampling the endpoints of the line segments specifically comprises:
    setting the 2D coordinates of the center point of each endpoint as the mean and the measurement uncertainty of the center point as the standard deviation, and sampling the endpoints of each line segment by Gaussian sampling.
  10. The image tampering forensics method according to claim 8, characterized in that
    the computing the vanishing point of the image under test in each direction specifically comprises: computing the vanishing point in each direction by maximum likelihood estimation;
    the constructing the vanishing line equation of the supporting plane specifically comprises: constructing the vanishing line equation using the two-point form of the line equation.
  11. The image tampering forensics method according to claim 8, characterized in that the 3D normal vector is computed as follows:

    $$n = K^T l$$

    wherein $n$ is the 3D normal vector of the supporting plane in the camera coordinate system, $K$ is the camera intrinsic matrix, $T$ is the matrix transpose symbol, and $l$ is the vanishing line equation.
  12. The image tampering forensics method according to claim 1, characterized in that the estimating a 3D pose of the target object based on the observation cues and the 3D morphable model specifically comprises:
    constructing, from the feature observation points of the target object in the image under test and the 3D morphable model, an objective function for fitting the 3D morphable model, and optimizing the objective function to obtain optimized 3D pose parameters and 3D shape parameters of the target object;
    wherein multiple parameter initializations are performed on the objective function, thereby obtaining multiple sets of optimized 3D pose parameters and 3D shape parameters.
  13. The image tampering forensics method according to claim 12, characterized in that the objective function is computed as follows:

    $$E(\theta_p, \theta_s; \theta_c) = \sum_{n=1}^{N} \min_m \left\| c_n - \hat{c}_m(\theta_p, \theta_s; \theta_c) \right\|^2$$

    wherein $N$ and $n$ are the total number and index of the feature observation points of the target object in the image under test; $c_n$ is the $n$-th feature observation point of the target object, and $\hat{c}_m$ is the $m$-th feature observation point of the 2D projection of the 3D morphable model; $\|c_n - \hat{c}_m\|^2$ is the squared Euclidean distance between feature observation points $c_n$ and $\hat{c}_m$; $\theta_p$ and $\theta_s$ are the 3D pose parameters and 3D shape parameters of the target object; $\theta_c$ denotes the camera intrinsics;
    the constraint of the objective function is as follows:

    $$\left|(\theta_s)_n\right| \le k\,\sigma_n$$

    wherein $(\theta_s)_n$ is the $n$-th component of the 3D shape parameters of the target object; $\sigma_n$ is the standard deviation of the $n$-th principal component direction when the 3D morphable model is constructed by principal component analysis, and $k$ is a preset constant.
  14. The image tampering forensics method according to claim 12, characterized in that the optimizing the objective function comprises optimizing the objective function with an iterative closest point algorithm, specifically comprising:
    finding, among the feature observation points of the 2D projection of the 3D morphable model, the closest point to each feature observation point of the target object in the image under test, and correcting the correspondence between the 3D morphable model and its 2D projection according to the correspondences between the feature observation points of the target object and their closest points;
    performing parameter optimization on the corrected 3D morphable model, and re-correcting the correspondence between the parameter-optimized 3D morphable model and its 2D projection until the residual of the objective function satisfies a convergence condition or a preset number of iterations is reached, the parameters including the 3D pose parameters and the 3D shape parameters.
  15. The image tampering forensics method according to claim 12, characterized in that the performing multiple parameter initializations on the objective function specifically comprises:
    randomly selecting multiple parameters within a parameter dispersion region centered on a preset parameter value, and using each of the selected parameters as the initial parameter value for a respective optimization run of the objective function.
  16. The image tampering forensics method according to claim 1, characterized in that the method comprises computing the parallelism between the target object and the supporting plane and/or among multiple target objects according to the following formula:

    $$D(\mathcal{P}, \mathcal{Q}) = \mathrm{Ang}(p_0, q_0)$$

    wherein $\mathcal{P}$ is the distribution set of plane normal vectors of one target object, and $\mathcal{Q}$ is the distribution set of 3D normal vectors of the supporting plane or the distribution set of plane normal vectors of another target object; $D(\mathcal{P}, \mathcal{Q})$ is the angle between the mean directions of distribution set $\mathcal{P}$ and distribution set $\mathcal{Q}$; $p_0$ is the weighted mean of distribution set $\mathcal{P}$, and $q_0$ is the weighted mean of distribution set $\mathcal{Q}$; Ang is the angle-computing function;
    the weighted mean $g_0$ of any distribution set $\mathcal{G}$ is computed as follows:

    $$g_0 = \frac{\sum_{a=1}^{A} g_a / e_a}{\sum_{a=1}^{A} 1 / e_a}$$

    wherein the distribution set $\mathcal{G}$ is the distribution set of plane normal vectors of a target object or the distribution set of 3D normal vectors of the supporting plane; $g_a$ is the $a$-th normal vector in $\mathcal{G}$, and $A$ is the total number of normal vectors in $\mathcal{G}$;
    $e_a$ is the residual of the $a$-th normal vector $g_a$: when $\mathcal{G}$ is the distribution set of plane normal vectors of a target object, the residual $e_a$ takes the value of the residual, satisfying the convergence condition, obtained by optimizing the objective function that fits the 3D morphable model; when $\mathcal{G}$ is the distribution set of 3D normal vectors of the supporting plane, the residual $e_a$ takes a fixed constant value.
  17. The image tampering forensics method according to any one of claims 1 to 16, characterized in that the method further comprises computing, based on the parallelism probability density distributions of authentic target objects and tampered target objects in the image under test, a parallelism threshold and a tampering probability used to judge whether the image under test is a tampered image; specifically:
    the tampering probability is computed as follows:

    $$P(y=1 \mid D) = \frac{f(D \mid y=1)\,P(y=1)}{f(D \mid y=1)\,P(y=1) + f(D \mid y=0)\,P(y=0)}$$

    wherein $y=1$ indicates that the image under test is a tampered image and $y=0$ indicates that the image under test is an authentic image; $D$ is the parallelism between the target object and the supporting plane in the image under test; $P(y=1\mid D)$ is the probability that the image under test is tampered when the parallelism of the target object is $D$; $f(D\mid y=1)$ is the probability density of parallelism $D$ when the image under test is tampered, and $f(D\mid y=0)$ is the probability density of parallelism $D$ when the image under test is authentic; the prior probabilities of the image under test being tampered and authentic are equal;
    the parallelism threshold is the parallelism D_50% corresponding to a tampering probability of 50%.
  18. An image tampering forensics apparatus, characterized in that the apparatus comprises:
    an observation cue annotation module, configured to annotate observation cues of an image under test, wherein the image under test comprises a target object and a supporting plane in a planar contact relationship;
    a 3D morphable model construction module, configured to construct a 3D morphable model of the object category to which the target object belongs;
    a supporting plane normal vector estimation module, configured to estimate a 3D normal vector of the supporting plane based on the observation cues;
    a target object normal vector estimation module, configured to estimate a 3D pose of the target object based on the observation cues and the 3D morphable model, thereby obtaining a plane normal vector of the plane on the side of the target object in contact with the supporting plane;
    a judgment module, configured to compute, based on the 3D normal vector and the plane normal vector, a parallelism between the target object and the supporting plane and/or among multiple target objects, and to judge, based on the parallelism, whether the image under test is a tampered image, wherein the parallelism is the angle between the normal vectors of different planes.
PCT/CN2017/076106 2017-03-09 2017-03-09 图像篡改取证方法及装置 WO2018161298A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/076106 WO2018161298A1 (zh) 2017-03-09 2017-03-09 Image tampering forensics method and apparatus
US16/336,918 US10600238B2 (en) 2017-03-09 2017-03-09 Image tampering forensics method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/076106 WO2018161298A1 (zh) 2017-03-09 2017-03-09 Image tampering forensics method and apparatus

Publications (1)

Publication Number Publication Date
WO2018161298A1 (zh) 2018-09-13

Family

ID=63447102

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/076106 WO2018161298A1 (zh) 2017-03-09 2017-03-09 图像篡改取证方法及装置

Country Status (2)

Country Link
US (1) US10600238B2 (zh)
WO (1) WO2018161298A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465768A (zh) * 2020-11-25 2021-03-09 Institute of Forensic Science, Ministry of Public Security Blind detection method and system for digital image splicing tampering
US11354797B2 (en) 2019-03-01 2022-06-07 Alibaba Group Holding Limited Method, device, and system for testing an image
CN114820436A (zh) * 2022-03-14 2022-07-29 Alipay (Hangzhou) Information Technology Co., Ltd. Tampering detection method and apparatus, storage medium, and electronic device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3051951B1 (fr) * 2016-05-27 2018-06-15 Mimi Hearing Technologies GmbH Method for producing a three-dimensional deformable model of an element, and associated system
CN109215121A (zh) * 2018-10-23 2019-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating information
EP3871117A4 (en) * 2018-12-07 2022-07-06 Microsoft Technology Licensing, LLC PROVIDING PRIVACY LABELED IMAGES
CN111325113A (zh) * 2020-02-03 2020-06-23 Alipay (Hangzhou) Information Technology Co., Ltd. Image detection method, apparatus, device, and medium
JP7440332B2 (ja) * 2020-04-21 2024-02-28 Hitachi, Ltd. Event analysis system and event analysis method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080193031A1 (en) * 2007-02-09 2008-08-14 New Jersey Institute Of Technology Method and apparatus for a natural image model based approach to image/splicing/tampering detection
CN104616297A (zh) * 2015-01-26 2015-05-13 Shandong Computer Science Center (National Supercomputer Center in Jinan) Improved SIFT algorithm for image tampering forensics
CN105374027A (zh) * 2015-10-09 2016-03-02 Donghua University Image tampering detection algorithm estimating illumination direction based on 3D reconstruction
CN105678308A (zh) * 2016-01-12 2016-06-15 Institute of Automation, Chinese Academy of Sciences Image splicing detection method based on illumination direction inconsistency

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004097828A2 (en) * 2003-04-25 2004-11-11 Thomson Licensing S.A. Marking techniques for tracking pirated media
US20140380477A1 (en) * 2011-12-30 2014-12-25 Beijing Qihoo Technology Company Limited Methods and devices for identifying tampered webpage and inentifying hijacked web address
US9031329B1 (en) * 2012-05-02 2015-05-12 Fourandsix Technologies, Inc. Photo forensics using image signatures
US10032265B2 (en) * 2015-09-02 2018-07-24 Sam Houston State University Exposing inpainting image forgery under combination attacks with hybrid large feature mining


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, YUN.: "COPY-MOVE IMAGE FORENSIC ALGORITHM BASED ON SIFT", CHINA MASTER'S THESES FULL-TEXT DATABASE INFORMATION SCIENCE AND TECHNOLOGY, 15 January 2016 (2016-01-15), ISSN: 1674-0246 *


Also Published As

Publication number Publication date
US10600238B2 (en) 2020-03-24
US20190228564A1 (en) 2019-07-25


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17899648; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 17899648; Country of ref document: EP; Kind code of ref document: A1)