CN115379198A - Camera tampering detection - Google Patents
Camera tampering detection
- Publication number
- CN115379198A (application CN202210477881.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- determining
- vehicle
- score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/01—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles operating on vehicle systems or fittings, e.g. on doors, seats or windscreens
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/20—Means to switch the anti-theft system on or off
- B60R25/25—Means to switch the anti-theft system on or off using biometry
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/30—Detection related to theft or to other events relevant to anti-theft systems
- B60R25/305—Detection related to theft or to other events relevant to anti-theft systems using a camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
Abstract
The present disclosure provides "camera tampering detection". A computer comprising a processor and a memory, the memory including instructions to be executed by the processor to: determining a center of an object in a first image acquired by a camera; determining a camera noise value for a region in the first image that includes the center; and determining a forgery score by comparing the camera noise value to previously determined camera noise values.
Description
Technical Field
The present disclosure relates to systems and methods for detecting camera tampering in a vehicle.
Background
Computer-based operations performed on image data may depend on having a verified image data source to operate successfully. Computer-based security operations include facial recognition, a type of biometric authentication in which an image of a person's face is used to determine the identity of a person seeking access to a vehicle, building, or device such as a computer or smartphone. Computer-based vehicle operations may include a computing device in a vehicle acquiring image data from sensors included in the vehicle and processing the image data to determine a vehicle path on which to operate the vehicle. Successful performance of computer-based image processing operations may depend on verifying the image data source.
Disclosure of Invention
Computer-based operations performed on image data may depend on having a verified image data source to operate successfully. Successful operation means operation of a computer-based system that achieves the design goals of the system. For example, a biometric authorization system should grant access to authorized personnel and deny access to unauthorized personnel. An autonomous vehicle system should operate the vehicle without contacting other vehicles or objects, such as buildings. Successful operation of security tasks such as biometric authorization and vehicle tasks such as autonomous or semi-autonomous operation may depend on verifying that the image data being processed was acquired by a particular camera. For example, an unauthorized user may attempt to gain access to a vehicle, building, or device by substituting an image of an authorized user from a source other than the camera included in the security system. In another example, an autonomous vehicle could be directed to an unauthorized location by replacing images acquired by the vehicle's camera with images of a different location. The techniques discussed herein may support successful operation of a computer-based image processing system by detecting camera tampering, which means unauthorized replacement of camera hardware or image data.
Biometric authentication may be used to control access to a building, house, or vehicle, and may be used to grant permission to operate a computer, cell phone, or other device. The biometric authentication software may be executed on a computing device included in the location or device being accessed, or the image data may be uploaded to a cloud-based server that maintains a database of trained models for execution. An example of biometric authentication software is facial recognition software such as FaceTracker, a facial recognition software library written in C++ and available under an open-source license. The results of performing the biometric authentication may be downloaded to the device seeking authentication, and permission to access the vehicle, vehicle controls, an area including a building or room, or a device including a computer or cell phone may be granted or denied.
Biometric facial recognition typically operates by calculating physiological characteristics of a human face and comparing the calculated physiological characteristics to stored physiological characteristics from a trained model. The physiological characteristics may include measures of facial features such as the distance between the pupils, the distance between the corners of the mouth, and the length of the nose, among others. These measures may be normalized by forming ratios of the measurements and stored as a trained model. At challenge time, an image of the person seeking access is acquired and processed to extract the physiological characteristics, which are then compared to the stored physiological characteristics to determine a match. Successful authentication may be used to unlock doors or enable vehicle controls. In other examples, successful authentication may be used for security applications, such as granting access to a location or room by unlocking a door, or, alternatively or additionally, granting access to a device such as a computer by enabling an input device such as a keyboard or mouse or by granting access to files.
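As an illustrative sketch of the ratio-based comparison described above, the following Python example computes normalized distance ratios and a match decision. The landmark names, the use of the inter-pupil distance as the normalizer, and the tolerance value are assumptions made for illustration and are not specified by this disclosure.

```python
import math

def feature_ratios(landmarks):
    """Form scale-invariant ratios from facial distance measurements.

    `landmarks` is assumed to be a dict of (x, y) points (pupils, mouth
    corners, nose bridge/tip) produced by a facial feature detector.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    inter_pupil = dist(landmarks["left_pupil"], landmarks["right_pupil"])
    mouth_width = dist(landmarks["left_mouth"], landmarks["right_mouth"])
    nose_length = dist(landmarks["nose_bridge"], landmarks["nose_tip"])
    # Normalizing by the inter-pupil distance makes the ratios insensitive
    # to how far the face is from the camera.
    return [mouth_width / inter_pupil, nose_length / inter_pupil]

def ratios_match(enrolled, challenge, tolerance=0.05):
    """Mean squared error between ratio sets, compared to a tolerance."""
    mse = sum((e - c) ** 2 for e, c in zip(enrolled, challenge)) / len(enrolled)
    return mse < tolerance
```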
As biometric authentication technology has advanced, so has the technology for tampering with camera data to spoof a biometric authentication system into authenticating a fraudulent imitation (a spoof). For example, neural networks can be used to generate "deepfake" images and videos, in which the face of one person can be edited onto the body of another person, a person's likeness can be placed into a scene they were never in, or a person can appear to say words they never spoke in real life. Counterfeit images produced by skilled manual image processing or by deepfake techniques can spoof biometric authorization systems based on facial recognition, thereby granting unauthorized persons access to vehicles, vehicle controls, buildings, computing devices, or cell phones.
A vehicle may operate in an autonomous ("autonomous" by itself in this disclosure means "fully autonomous") mode, a semi-autonomous mode, or a passenger-driven (also referred to as non-autonomous) mode. A computing device in a vehicle may receive image data regarding vehicle operation from vehicle sensors including cameras. The computing device may operate the vehicle in an autonomous mode, a semi-autonomous mode, or a non-autonomous mode based on processing image data acquired by the vehicle sensors including cameras. A computing device in a vehicle may use image data to detect objects in the vehicle's surroundings, such as other vehicles, pedestrians, traffic signs, obstacles, etc. Image processing software included in the computing device may depend on receiving data with expected pixel intensities to operate properly. For example, camera tampering (such as unauthorized replacement of the expected camera with a different camera, or substitution of video data from another source) may cause the image processing software to yield incorrect results, such as a pedestrian not being detected in the field of view of the camera. Detecting camera tampering can identify unauthorized data introduced into the vehicle operating system.
The techniques discussed herein detect camera tampering by identifying deviations from intrinsic properties of image data acquired by a given camera. The intrinsic properties of the image data include the noise distribution in the image data. The techniques discussed herein can detect visual spoofing while using computing resources efficiently enough to run in real time on a vehicle computing system. The techniques discussed herein use camera noise characterization to create a binary classification distinguishing untampered images from tampered images. For example, a camera noise profile may be determined based on photo response non-uniformity (PRNU). The PRNU may be used to create an expected noise "fingerprint" to identify an image that includes a particular person. Although machine learning counterfeiting algorithms can be visually deceptive, the noise "fingerprint" remains inherent to the imager and to the portion of the image that includes the face. The techniques discussed herein improve vehicle biometric authentication by detecting camera tampering, particularly camera tampering that involves a forged image of a person.
Another technique for determining a camera noise "fingerprint" is to measure camera dark current noise. Camera dark current noise is a measure of the amount of current produced by a camera pixel without any light stimulus falling on the pixel. A pixel is the portion of the camera sensor that converts incident light into current. Dark current noise is thermal noise caused by electrons that are spontaneously generated within the camera sensor due to thermal excitation of valence electrons into the conduction band. The variation in the number of dark electrons collected during exposure is dark current noise. The dark current noise is independent of the signal level but depends on the temperature of the sensor. Each pixel included in the sensor may have a characteristic level of dark current noise that may be combined to identify a particular camera.
Camera dark current noise data can be acquired by masking a small number of sensor pixels, typically along one or more edges of the sensor, to ensure that no light falls on those pixels. When an image from the camera is acquired, the masked pixels appear as dark pixels in the image. By acquiring a long-exposure image, sufficient hot electrons are generated in the masked portion of the image to allow the dark current noise to be measured. Because hot electrons obey Poisson statistics, the dark current noise corresponding to the current generated by hot electrons is the square root of the pixel value. The current generated by hot electrons is also a function of temperature, so the temperature of the sensor can be measured to determine a temperature factor by which the pixel values are multiplied to compensate for temperature. The resulting dark current noise values of the masked pixels may be combined for comparison with subsequently acquired dark current noise values, or retained as a pixel array for correlation with subsequently acquired dark current noise values.
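A minimal Python sketch of how such a dark-current fingerprint might be computed and compared is shown below. The handling of the temperature factor, the use of the mean dark level, and the similarity measure are illustrative assumptions rather than details specified by this disclosure.

```python
import numpy as np

def dark_current_fingerprint(frames, mask, temperature_factor=1.0):
    """Estimate a dark-current noise fingerprint from masked sensor pixels.

    frames: list of long-exposure images (H x W arrays) from one camera.
    mask:   boolean H x W array, True where pixels are optically masked.
    temperature_factor: scalar applied to pixel values to compensate for
        sensor temperature (how it is derived is an assumption here).
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    dark = stack[:, mask] * temperature_factor
    mean_dark = dark.mean(axis=0)
    # Hot electrons obey Poisson statistics, so the noise associated with a
    # mean dark level is its square root.
    return np.sqrt(np.clip(mean_dark, 0.0, None))

def dark_current_similarity(fp_a, fp_b):
    """Normalized correlation between two dark-current fingerprints."""
    a = (fp_a - fp_a.mean()) / (fp_a.std() + 1e-12)
    b = (fp_b - fp_b.mean()) / (fp_b.std() + 1e-12)
    return float(np.mean(a * b))
```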
A counterfeit image of a person may be produced by obtaining an image of a person authorized to access the vehicle from a camera source different from the camera included in the vehicle. In this context, presenting such a counterfeit image to the biometric authorization system is referred to as camera tampering. The techniques discussed herein may improve a biometric authorization system by determining one or more distributions of image noise in a portion of the image that includes a human face to determine camera tampering. For example, when camera tampering is detected, the techniques discussed herein can prevent the image data from being used for biometric authentication. The techniques disclosed herein may also determine a forgery score that indicates a probability that an image presented to the biometric authorization system was acquired by the same camera that previously acquired the enrollment image. The forgery score can be multiplied by a score output from a subsequent biometric authentication step to determine an overall probability that the person in the challenge image is the same person as in the enrollment image.
The techniques discussed herein may determine camera tampering by processing an image from a camera to determine one or more camera noise value distributions for regions of the image and comparing the distributions of camera noise values to previously determined distributions of camera noise values based on images previously acquired from the same camera. As discussed above, camera noise may be measured by PRNU or dark current noise. The PRNU measures the fixed pattern noise of the camera, where fixed pattern noise is a measure of the response of the pixels to a uniform light stimulus. A PRNU may be determined for each pixel corresponding to a photoreceptor in the camera. The photoreceptor may be optically filtered using red, green, or blue mosaic filters, or may be a visible or near-infrared grayscale pixel. Existing techniques for detecting camera tampering include machine learning techniques and data-driven techniques. Machine learning techniques can detect camera tampering by analyzing images to detect blurriness at the edges of tampered regions using, for example, a deep neural network trained for this purpose. Data-driven techniques form a database of tampered images and attempt to detect camera tampering by matching images to images in the tampered image database. The PRNU distribution comparison may be a more reliable camera tampering metric than machine learning or data-driven techniques. PRNU or dark current matching may also be determined using fewer computational resources than machine learning or data-driven techniques.
Vehicles, buildings, and devices may be equipped with computing devices, networks, sensors, and controllers to acquire and/or process data about the environment and to permit access based on that data. For example, a camera in a vehicle or building may be programmed to acquire an image of an approaching user and, upon determining the identity of the user based on facial recognition software, unlock a door to allow the user access. Similarly, a camera included inside the vehicle or device may acquire one or more images of a user and, upon determining the identity of the user based on facial recognition software, accept commands from that person to operate the vehicle or device.
Drawings
FIG. 1 is a diagram of an exemplary vehicle.
Fig. 2 is a diagram of an exemplary photo-responsive non-uniformity process.
Fig. 3 is a diagram of an exemplary image with regions.
Fig. 4 is a diagram of an exemplary image in which a human face is detected.
Fig. 5 is a diagram of an exemplary image in which a single region is selected.
FIG. 6 is a flow diagram of an exemplary process for determining camera tampering.
Detailed Description
Fig. 1 is a diagram of a vehicle 110 including a computing device 115 and sensors 116. The computing device (or computer) 115 includes a processor and memory such as is known. Further, the memory includes one or more forms of computer-readable media and stores instructions that are executable by the processor to perform various operations, including as disclosed herein. For example, the computing device 115 may include programming to operate one or more of vehicle braking, propulsion (e.g., controlling acceleration of the vehicle 110 by controlling one or more of an internal combustion engine, an electric motor, a hybrid engine, etc.), steering, climate control, interior and/or exterior lights, etc., and to determine whether and when the computing device 115 (rather than a human operator) controls such operations.
The computing device 115 may transmit and/or receive messages to and/or from various devices in the vehicle (e.g., controllers, actuators, sensors, including sensor 116, etc.) via the vehicle network. Alternatively or additionally, where computing device 115 actually includes multiple devices, a vehicle communication network may be used to communicate between devices represented in this disclosure as computing device 115. Additionally, as mentioned below, various controllers or sensing elements (such as sensors 116) may provide data to the computing device 115 via a vehicle communication network.
Further, the computing device 115 may be configured to communicate with a remote server computer (e.g., a cloud server) via a network through a vehicle-to-infrastructure (V-to-I) interface 111, which, as described below, includes hardware, firmware, and software that permit the computing device 115 to communicate with a remote server computer via, for example, a wireless internet or cellular network. Accordingly, the V-to-I interface 111 may include processors, memories, transceivers, etc., configured to utilize various wired and/or wireless networking technologies, e.g., cellular, broadband, Ultra Wideband (UWB), and wired and/or wireless packet networks. The computing device 115 may be configured to communicate with other vehicles 110 through the V-to-I interface 111 using vehicle-to-vehicle (V-to-V) networks formed, for example, on an ad hoc basis among nearby vehicles 110 or through infrastructure-based networks (e.g., in accordance with Dedicated Short Range Communications (DSRC) and/or the like). The computing device 115 also includes non-volatile memory such as is known. The computing device 115 may record data by storing the data in non-volatile memory for later retrieval and transmission via the vehicle communication network and the vehicle-to-infrastructure (V-to-I) interface 111 to a server computer or user mobile device.
As already mentioned, programming for operating one or more vehicle 110 components (e.g., braking, steering, propulsion, etc.) without human operator intervention is typically included in instructions stored in a memory and executable by a processor of the computing device 115. Using data received in the computing device 115 (e.g., sensor data from the sensors 116, data from a server computer, etc.), the computing device 115 may make various determinations and/or control various vehicle 110 components and/or operations. For example, the computing device 115 may include programming to adjust vehicle 110 operating behaviors (i.e., physical manifestations of vehicle 110 operation) such as speed, acceleration, deceleration, steering, etc., as well as tactical behaviors (i.e., control of operating behaviors typically in a manner intended to achieve safe and efficient traversal of a route) such as a distance between vehicles and/or amount of time between vehicles, lane changes, minimum gap between vehicles, left-turn-across-path minimum, time of arrival at a particular location, and minimum time from arrival at an (unsignalized) intersection to crossing the intersection.
The one or more controllers 112, 113, 114 for the vehicle 110 may include known Electronic Control Units (ECUs) and the like, including, by way of non-limiting example, one or more powertrain controllers 112, one or more brake controllers 113, and one or more steering controllers 114. Each of the controllers 112, 113, 114 may include a respective processor and memory and one or more actuators. The controllers 112, 113, 114 may be programmed and connected to a vehicle 110 communication bus, such as a Controller Area Network (CAN) bus or a Local Interconnect Network (LIN) bus, to receive instructions from the computing device 115 and control actuators based on the instructions.
The sensors 116 may include various devices known to share data via a vehicle communication bus. For example, a radar fixed to a front bumper (not shown) of vehicle 110 may provide a distance from vehicle 110 to the next vehicle in front of vehicle 110, or a Global Positioning System (GPS) sensor disposed in vehicle 110 may provide geographic coordinates of vehicle 110. The range provided by the radar and/or other sensors 116 and/or the geographic coordinates provided by the GPS sensors may be used by the computing device 115 to operate the vehicle 110.
The vehicle 110 is typically a land-based vehicle 110 (e.g., passenger car, light truck, etc.) that is operable and has three or more wheels. Vehicle 110 includes one or more sensors 116, V-to-I interface 111, computing device 115, and one or more controllers 112, 113, 114. Sensors 116 may collect data related to vehicle 110 and the operating environment of vehicle 110. By way of example and not limitation, sensors 116 may include, for example, altimeters, cameras, lidar, radar, ultrasonic sensors, infrared sensors, pressure sensors, accelerometers, gyroscopes, temperature sensors, pressure sensors, hall sensors, optical sensors, voltage sensors, current sensors, mechanical sensors (such as switches), and the like. The sensors 116 may be used to sense the operating environment of the vehicle 110, for example, the sensors 116 may detect phenomena such as weather conditions (rain, outside ambient temperature, etc.), road grade, road location (e.g., using road edges, lane markings, etc.), or the location of a target object, such as a neighboring vehicle 110. The sensors 116 may also be used to collect data, including dynamic vehicle 110 data related to the operation of the vehicle 110, such as speed, yaw rate, steering angle, engine speed, brake pressure, oil pressure, power levels applied to the controllers 112, 113, 114 in the vehicle 110, connectivity between components, and accurate and timely execution of components of the vehicle 110.
Fig. 2 is a diagram of an example PRNU determination. A PRNU for image data may be determined based on a number of images acquired by a sensor 116 included in a vehicle, building, or device such as a computer or cell phone and transmitted to a computing device 115 for processing. One or more images 200 may be acquired from the image sensor 116; in the example of Fig. 2, four images 202, 204, 206, 208 are acquired by the sensor 116 included in the vehicle and input to a PRNU block 210, which may be a software program executing on the computing device 115. The PRNU block 210 determines an unbiased estimate K̂ for each pixel location in the one or more input images 200 according to the following equation:
K̂ = ( Σ_i W_i · I_i ) / ( Σ_i I_i² )    (1)

where the sums run over the one or more images 200, I_i is the pixel value of the i-th image, and W_i is the noise residual of the i-th image, determined by filtering the i-th image with a denoising filter, such as a wavelet or smoothing filter, to form a filtered image I_i^(0) and then subtracting the filtered image I_i^(0) from the unfiltered image I_i to form the noise residual W_i = I_i − I_i^(0). The PRNU is calculated by first determining the unbiased estimate K̂ for each pixel location and then determining the difference between each pixel value and the unbiased estimate K̂ in each of the one or more images 200. Statistics based on the distribution of these differences may be determined for each pixel location common to the one or more images 200, thereby producing, for each statistical metric applied to the differences, a new image having the same size as the one or more images 200.
Based on the pixel values and the unbiased estimate K̂, the images of PRNU values may include measures of the mean, variance, skewness, and kurtosis for each pixel location. For each pixel location, the first metric of the PRNU values is the mean of the differences between the pixel values and the unbiased estimate K̂. The second metric is the second moment about the mean, i.e., the variance. Variance is a measure of the width of the distribution with respect to the mean. The third metric is skewness. Skewness is a measure of how symmetric the distribution is about the mean. The fourth metric is kurtosis. Kurtosis is a measure of how flat or peaked the distribution is. One to four PRNU images can be determined based on how much precision is required in the counterfeit detection process. For example, PRNU values corresponding to the mean and variance can be used to distinguish an image of a photograph of a human face from an image of a real human face. All four PRNU images, including skewness and kurtosis, may be needed to detect a genuine image of a real face acquired with a different make and model of camera.
The PRNU block 210 outputs one or more PRNU images corresponding to the mean, variance, skewness, and kurtosis. The mean, variance, skewness, and kurtosis may also be calculated for each color of a red, green, blue (RGB) image or for a grayscale image. The correct calculation of mean, variance, skewness, and kurtosis should be performed on the raw pixel data without averaging or combining pixels. Calculating PRNU values for the mean, variance, skewness, and kurtosis for each color in an RGB image may produce twelve output images from the PRNU block 210. An advantage of calculating the PRNU values using the unbiased estimate K̂ is that the PRNU values can be determined without presenting one or more predetermined targets (such as a sinusoidal pattern at a predetermined illumination level) to the camera. The PRNU value is a measure of fixed point noise in an image, where fixed point noise is the variation in pixel values from one image to the next at a particular location in an image, determined by acquiring more than one image of the same scene. The mean, variance, skewness, and kurtosis may also be determined for the camera dark current noise, and the camera dark current noise statistics may be compared using the same techniques used to compare the PRNU values.
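The following Python sketch illustrates one way the noise residuals and the per-pixel mean, variance, skewness, and kurtosis images could be computed. The Gaussian filter standing in for the wavelet/smoothing denoiser and the use of the residuals themselves for the statistics are simplifying assumptions, not requirements of this disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prnu_statistic_images(images, sigma=1.0):
    """Per-pixel noise statistic images for a set of images from one camera.

    A Gaussian filter is used here as the denoising filter; a wavelet
    filter could be substituted.  Statistics are computed on the noise
    residuals W_i = I_i - I_i^(0) as a simplification.
    """
    residuals = []
    for img in images:
        i = img.astype(np.float64)
        i0 = gaussian_filter(i, sigma=sigma)   # denoised image I_i^(0)
        residuals.append(i - i0)               # noise residual W_i
    w = np.stack(residuals, axis=0)

    mean = w.mean(axis=0)
    var = w.var(axis=0)
    std = np.sqrt(var) + 1e-12
    centered = w - mean
    skew = (centered ** 3).mean(axis=0) / std ** 3
    kurt = (centered ** 4).mean(axis=0) / std ** 4
    # One output image per statistic, each the same size as the inputs.
    return mean, var, skew, kurt
```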
Fig. 3 is a diagram of an exemplary image 300 captured by a camera included in the vehicle 110 and transmitted to the computing device 115. The image 300 is presented in grayscale to comply with patent office regulations, but may be a grayscale image or an RGB image. The image 300 may be divided into regions Z_rc 302, and PRNU values calculated for a region Z_rc 302 that includes an object of interest (e.g., a human face). Calculating the PRNU values for a region Z_rc 302 that includes a human face may identify both the camera that acquired the image and the face in the image. Both the camera and the face introduce fixed pattern noise, which may be characterized by PRNU values including the mean, variance, skewness, and kurtosis of the distribution of fixed point noise for the pixels in an image. Comparing the PRNU values to previously acquired PRNU values can determine whether the face in the current image is the same face as in an image previously acquired by the same camera. The PRNU values may be compared by correlating them with the previously acquired PRNU values to determine a correlation coefficient. Up to four PRNU values may be determined for one or more grayscale images. In examples where four PRNU values are determined for each of the red, green, and blue channels of an RGB color image, up to 12 PRNU values may be determined.
After up to 12 images of PRNU values are computed, correlation may be used to compare the images of PRNU values with the images of PRNU values obtained at enrollment. Enrollment in this context refers to determining a camera noise profile by calculating the PRNU values for a given camera. The enrollment PRNU values can be compared to challenge PRNU values obtained later to determine camera tampering by correlating the two sets of PRNU values. Correlation is an operation in which each value of the first image is multiplied by a corresponding value of the second image and the products are summed, while the center of the first image is shifted relative to the center of the second image. After multiplication and summation, the maximum value may be normalized and selected as the correlation coefficient. One or more correlation coefficients may be averaged to determine an overall correlation coefficient ρ. The forgery score F can be determined from the following calculation:
F = 1 − α × ρ(enrollment, challenge)    (2)
where enrollment is the image of PRNU values from the enrollment image and challenge is the image of PRNU values from the challenge image, ρ() is the correlation function discussed above, and α is a scalar constant that can be determined empirically by acquiring and testing multiple genuine and counterfeit images.
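A possible Python sketch of the correlation coefficient and forgery score calculation follows. The FFT-based search over shifts and the normalization used here are one implementation choice among several; α is left as a parameter to be set empirically, as described above.

```python
import numpy as np

def correlation_coefficient(enrollment, challenge):
    """Peak normalized cross-correlation between two noise-value images."""
    e = enrollment.astype(np.float64) - enrollment.mean()
    c = challenge.astype(np.float64) - challenge.mean()
    h, w = e.shape
    # Cross-correlation over all relative shifts, computed via the FFT.
    f = np.fft.fft2(e, s=(2 * h, 2 * w))
    g = np.fft.fft2(c, s=(2 * h, 2 * w))
    xcorr = np.fft.ifft2(f * np.conj(g)).real
    norm = np.sqrt((e ** 2).sum() * (c ** 2).sum()) + 1e-12
    return float(xcorr.max() / norm)

def forgery_score(enrollment_noise_images, challenge_noise_images, alpha=1.0):
    """F = 1 - alpha * rho, with rho averaged over the available noise images."""
    rhos = [correlation_coefficient(e, c)
            for e, c in zip(enrollment_noise_images, challenge_noise_images)]
    rho = sum(rhos) / len(rhos)
    return 1.0 - alpha * rho
```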
Determining camera tampering by calculating a forgery score F for an image from a camera can be made more efficient by reducing the resolution of the image. The resolution of the image can be reduced without reducing the accuracy of the forgery score F by performing 1/2 × 1/2 downsampling, i.e., keeping every other pixel in the x and y directions to form a reduced-resolution image. This results in an image with 1/4 of the pixel count, thereby reducing the number of calculations required to determine the forgery score F by a factor of about four. Depending on the amount of image noise present, the image can be downsampled to 1/16 of the original number of pixels (1/4 × 1/4 downsampling) without changing the forgery score F. The amount of downsampling to use may be determined empirically by comparing the forgery scores F of the original image and of the downsampled image to determine which level of downsampling to employ for a given camera. The techniques discussed herein may improve camera tampering detection by downsampling the image data to reduce the computation time and resources required to compute the forgery score F.
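For completeness, the 1/2 × 1/2 downsampling described above can be expressed as a simple array-slicing operation; this is a straightforward illustration, not a prescribed implementation.

```python
import numpy as np

def downsample(image, step=2):
    """Keep every `step`-th pixel in x and y.

    step=2 keeps 1/4 of the pixels (1/2 x 1/2 downsampling);
    step=4 keeps 1/16 of the pixels (1/4 x 1/4 downsampling).
    """
    return image[::step, ::step]
```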
Fig. 4 is a diagram of an image 400 captured by a camera included in the vehicle 110 and transmitted to the computing device 115. The image 400 is presented in grayscale to comply with patent office regulations, but may be a grayscale image or an RGB image. The techniques discussed herein can detect a counterfeit image by first detecting the outline of a human face 402 in the image 400 using image processing software. Image processing software that can determine face contours is included in Dlib, a toolkit containing machine learning algorithms and tools for creating complex software in C++, which is available under an open-source license that allows it to be used free of charge. Dlib includes a function called "get_frontal_face_detector", which is configured to find faces that are oriented more or less toward the camera. A computing device 115 executing a program such as "get_frontal_face_detector" may detect the outline of a human face 402 and the center 404 of the face in the image 400.
Fig. 5 is a diagram of an image 500 captured by a camera included in the vehicle 110 and transmitted to the computing device 115. The image 500 is presented in grayscale to comply with patent office regulations, but may be a grayscale image or an RGB image. The image 500 has been masked by determining the center 504 of a face 502 included in the image 500 and detected using a procedure such as "get_frontal_face_detector" described above with respect to Fig. 4. The computing device 115 may mask the image 500 by determining the region Z_rc 506 that includes the center 504 of the face 502 and setting all pixel values outside the region Z_rc 506 to zero. The forgery score F of the image 500 may then be calculated from the camera noise values of the pixels within the region Z_rc 506. The camera noise values may be processed as described with respect to Figs. 4 and 5 by correlating them with camera noise values from an enrollment image that has likewise been processed to include only the pixels corresponding to the region Z_rc that includes the center of the face.
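A Python sketch of this masking step is shown below, using Dlib's frontal face detector mentioned above. The region size and the choice of the first detected face are illustrative assumptions.

```python
import numpy as np
import dlib

def masked_face_region(image, region_size=128):
    """Zero out all pixels except a square region Z_rc around the face center.

    image: H x W (grayscale) or H x W x 3 (RGB) uint8 array.
    Returns the masked image, or None if no face is detected.
    """
    detector = dlib.get_frontal_face_detector()
    faces = detector(image)
    if not faces:
        return None
    face = faces[0]                          # first detected face
    cx = (face.left() + face.right()) // 2   # center 504 of the face
    cy = (face.top() + face.bottom()) // 2
    half = region_size // 2
    r0, r1 = max(cy - half, 0), min(cy + half, image.shape[0])
    c0, c1 = max(cx - half, 0), min(cx + half, image.shape[1])
    masked = np.zeros_like(image)
    masked[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    return masked
```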
After the forgery score F of the image 500 is determined, the image 500 may be passed to a biometric authorization system executing on the computing device 115. The biometric authorization program may include facial recognition software. The facial recognition software may determine two sets of facial features, corresponding to the challenge image and the enrollment image, and determine distance ratios between the features. The facial recognition software may determine a facial recognition score by determining a match value against previously determined facial recognition features. The facial recognition score may be combined with the forgery score to determine a forgery confidence score. The user authentication state may be determined by comparing the forgery confidence score to a threshold. The threshold may be determined empirically by acquiring a plurality of genuine and counterfeit images 500, determining the forgery scores F and facial recognition scores for those images, and setting the threshold based on the resulting forgery confidence scores.
Facial features include locations on the facial image, such as the inner and outer corners of the eyes and the corners of the mouth. For example, a facial feature detection program (such as SURF in the Dlib image processing library) may determine locations on the face corresponding to facial features (such as the center of each eye and the center of the mouth). The facial recognition software may compare ratios based on the two sets of features and determine a match value. If the ratios between the feature sets match, meaning that they have the same values within an empirically determined tolerance, then the person in the challenge image is determined to be the same person as in the previously acquired enrollment image. The match value can be determined by computing the mean square error between the two sets of ratios. Matching distance ratios may reduce the variance in facial feature measurements caused by differences in distance from the camera and differences in pose between the two images. The facial recognition score may be determined using an equation similar to equation (2) above, replacing the correlation function with the match value and determining a value of α that maps the match score to the interval (0, 1), where a value close to 1 corresponds to a good match and a value close to 0 corresponds to a poor match.
A forgery confidence score for biometric authorization may be determined by multiplying the forgery score by the facial recognition score. The forged confidence score may be used to determine a user authentication state. A counterfeit confidence score greater than the threshold may indicate that the challenge image may not be counterfeit and that the challenge matches well with the enrollment image, and thus the user authentication status should be "authenticated" and the user may be granted access to the vehicle, building, or device. A counterfeit confidence score less than the threshold value may indicate that the challenge image is likely counterfeit or that the facial recognition score indicates that the challenge image does not match the registered image, or both, and thus the user authentication status should be set to "unauthenticated" and the user should not be granted access to the vehicle, building, or device. A forgery confidence score less than a threshold value may indicate a problem with a forgery score or a face recognition score, i.e., a forged image passes face recognition, or a genuine image fails face recognition. In either case, access to the vehicle, area, or computer should be denied. In one example, an exemplary threshold for determining a successful counterfeit confidence score is 0.5, and may typically be determined empirically based on testing the system with multiple real and counterfeit images.
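Expressed as code, the combination of the two scores and the threshold comparison might look like the following sketch. The interpretation that a higher forgery score indicates a more trustworthy image follows the combination rule described above, and the 0.5 threshold is the exemplary value mentioned in the text.

```python
def authentication_state(forgery_score_f, face_recognition_score, threshold=0.5):
    """Combine the forgery and facial recognition scores into a user
    authentication state.

    Both inputs are assumed to lie in (0, 1), with values near 1 meaning
    "likely genuine" / "good match".
    """
    forgery_confidence = forgery_score_f * face_recognition_score
    # Above the threshold: the image is likely not counterfeit and the faces
    # match, so access may be granted.  Otherwise access is denied.
    return "authenticated" if forgery_confidence > threshold else "not authenticated"
```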
As described herein, combining a forgery score and a facial recognition score to determine a forgery confidence score can improve biometric authorization by detecting forged images that can successfully pass biometric authorization. Determining a forgery score based on a single image region that includes the center of the face allows the forgery score to be determined with significantly less computer resources than prior techniques that required processing of the entire image to determine camera tampering.
The forgery score F may also be used to determine camera tampering of an image 500 used for operating the vehicle 110. As discussed above with respect to Fig. 1, the computing device 115 may operate the vehicle 110 using images 500 from cameras included in the vehicle 110. Camera tampering may cause image processing software included in the computing device 115 to output erroneous results. For example, changing a camera in a vehicle to a different camera with a different resolution and response to light may cause the pixel values corresponding to an object in the image to change enough to prevent the image processing software from detecting the object. In other examples, image data from the original camera may be replaced with image data from a different source for malicious purposes, for example, to cause the vehicle 110 to contact an object. Detecting camera tampering by determining the forgery score F can prevent erroneous operation of the vehicle 110 due to camera tampering.
The techniques discussed herein with respect to camera tampering detection may be subject to reinforcement learning. Reinforcement learning is performed by: maintaining statistics on the number of correct and incorrect results achieved by the camera tamper detection system in use and using the statistics to retrain the camera tamper detection system. For example, assume that a camera tampering detection system is used as an input to a biometric authorization system for unlocking a vehicle, building or device upon approach of a valid user. A valid user is a user who has a pre-arranged permission to use a vehicle, building or device. In examples where the camera tamper detection system fails to properly authenticate the camera and unlock the vehicle, the user may be forced to unlock the vehicle manually with a key or key fob, or use a two-factor authorization system, such as entering a code sent to a cell phone number. When a user is forced to manually unlock the vehicle, the camera tamper detection system may store data regarding incorrect camera source data, including an image of the user.
Determining how to process data regarding incorrect camera tampering detection may be based on a reward system. The reward system retrains the camera tampering detection system on the camera tampering detection data according to the outcome of the authentication failure. If the potential user does not gain access to the vehicle, it is assumed that the failed attempt was an attempted spoof, and the data is appended to a training data set of potential spoofing data. If the potential user gains access using one of the manual methods (e.g., key, key fob, or two-factor authorization), the data is appended to a false-negative training data set that will be corrected during the training process. The authentication system may be retrained periodically based on the updated training data set, or when the number of new camera tampering detection data sets added to the training data set exceeds a user-determined threshold. Retraining is applicable both to Gaussian-parameter-based deterministic authentication systems and to deep neural network-based systems.
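The reward-based bookkeeping described above could be organized roughly as in the following sketch; the data set names, the retraining trigger, and the retrain() placeholder are assumptions for illustration.

```python
class TamperDetectionRetrainer:
    """Collects failed-authentication outcomes and triggers retraining."""

    def __init__(self, retrain_threshold=100):
        self.spoof_attempts = []     # failed access with no manual override
        self.false_negatives = []    # failed access followed by manual unlock
        self.retrain_threshold = retrain_threshold

    def record_failed_authentication(self, image, manually_unlocked):
        if manually_unlocked:
            # The user proved legitimacy (key, fob, or two-factor code),
            # so this was a false negative to correct during retraining.
            self.false_negatives.append(image)
        else:
            # No legitimate access followed; treat it as an attempted spoof.
            self.spoof_attempts.append(image)
        if len(self.spoof_attempts) + len(self.false_negatives) >= self.retrain_threshold:
            self.retrain()

    def retrain(self):
        # Placeholder: retrain the tamper detection model on the updated
        # data sets, or upload them to a central server for federation.
        pass
```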
Data regarding camera tampering verification failures may be federated, or shared, among multiple vehicles. Such data can be uploaded to a cloud-based server that includes a central repository of training data sets. The uploaded camera source verification data sets and their corresponding results may be aggregated into an updated training data set, and the results of retraining on the new data may be compared to the results of the previous training. If the new training data set improves performance, the new trained model may be pushed or downloaded to the vehicles using the camera tampering detection system. It should be noted that no personal data about the user's identity needs to be uploaded to the cloud-based server; only the camera source verification data sets and results are needed. By federating new trained models based on training data uploaded from multiple locations, the performance of the camera tampering detection system can be continuously improved over the lifecycle of the system.
Fig. 6 is a diagram of a flow chart of a process 600 for detecting counterfeit images described with respect to fig. 1-4. Process 600 may be implemented by a processor of a computing device, such as computing device 115, for example, taking information from sensors as input and executing commands and outputting object information. Process 600 includes a number of blocks that may be performed in the order shown. Alternatively or additionally, process 600 may include fewer blocks, or may include blocks performed in a different order.
The process 600 begins at block 602, where the computing device 115 determines a center of an object in a first image acquired by a camera included in the vehicle 110. The object may be a face and the computing device 115 may locate the center of the face and determine an image region that includes the center of the object, as discussed above with respect to fig. 3 and 4.
At block 604, the computing device 115 determines a forgery score for the region of the first image that includes the center of the face determined at block 602. The forgery score may be determined by determining a camera noise value for a first image or challenge image as discussed above with respect to fig. 2, and then correlating the determined camera noise value with a stored camera noise value for a second image or registration image previously acquired using the same camera that acquired the first image, as discussed above with respect to fig. 5.
At block 606, the computing device 115 processes the first image or challenge image to determine a facial recognition score by extracting facial features from the first image or challenge image and comparing the facial features to facial features previously determined from the enrollment image using facial recognition software, as discussed above with respect to fig. 5.
At block 608, the computing device 115 may combine the forgery score with the facial recognition score by multiplying them together to determine a forgery confidence score.
At block 610, the computing device 115 compares the total score to a threshold to determine whether the user authentication status should be set as authenticated or unauthenticated. When the total score is greater than the threshold, the user authentication status is authenticated and process 600 passes to block 612. When the total score is less than or equal to the threshold, the user authentication status is not authenticated and the process 600 passes to block 614.
At block 612, if the total score is greater than the threshold, it may be determined that the first image or challenge image is legitimate, i.e., not counterfeit, because the face in the first image or challenge image matches the face in the registered image. Thus, the user authentication state is authenticated and user access to the vehicle, vehicle controls, building, room, or device protected by the biometric authentication system is granted. After block 612, the process 600 ends.
At block 614, if the total score is less than or equal to the threshold, it may be determined that the first image or challenge image is likely to be counterfeit, or that the face in the first image or challenge image does not match the face in the enrollment image, or both. Thus, the user authentication state is not authenticated, and user access to the vehicle, vehicle controls, building, room, or device protected by the biometric authentication system is denied. After block 614, the process 600 ends.
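Pulling the earlier sketches together, blocks 602-614 could be expressed as the following function. It relies on the illustrative helpers defined above (masked_face_region, prnu_statistic_images, forgery_score) and takes the facial recognition score as an input; none of this structure is mandated by the disclosure.

```python
def process_600(challenge_image, enrollment_noise_images, face_recognition_score,
                threshold=0.5):
    """End-to-end sketch of process 600 using the helpers defined above."""
    masked = masked_face_region(challenge_image)               # block 602
    if masked is None:
        return "not authenticated"
    challenge_noise_images = prnu_statistic_images([masked])   # block 604
    f = forgery_score(enrollment_noise_images, challenge_noise_images)
    confidence = f * face_recognition_score                    # blocks 606-608
    return ("authenticated" if confidence > threshold          # blocks 610-614
            else "not authenticated")
```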
Computing devices such as those discussed herein typically each include commands that are executable by one or more computing devices such as those identified above and for implementing the blocks or steps of the processes described above. For example, the process blocks discussed above may be embodied as computer-executable commands.
The computer-executable commands may be compiled or interpreted from computer programs created using a variety of programming languages and/or techniques, including but not limited to: Java, C++, Python, Julia, Scala, Visual Basic, JavaScript, Perl, HTML, etc. In general, a processor (e.g., a microprocessor) receives commands, e.g., from a memory, a computer-readable medium, etc., and executes the commands, thereby performing one or more processes, including one or more of the processes described herein. Such commands and other data may be stored in files and transmitted using a variety of computer-readable media. A file in a computing device is typically a collection of data stored on a computer-readable medium, such as a storage medium, random access memory, or the like.
Computer-readable media includes any medium that participates in providing data (e.g., commands) that may be read by a computer. Such a medium may take many forms, including but not limited to non-volatile media, volatile media, and the like. Non-volatile media includes, for example, optical or magnetic disks and other persistent memory. Volatile media includes dynamic random access memory (DRAM), which typically constitutes a main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM, a DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Unless expressly indicated to the contrary herein, all terms used in the claims are intended to be given their ordinary and customary meaning as understood by those skilled in the art. In particular, the use of singular articles such as "a," "the," "said," etc. should be read to recite one or more of the indicated elements unless a claim recites an explicit limitation to the contrary.
The term "exemplary" is used herein in a sense that it represents an example, e.g., a reference to "exemplary widget" should be read to refer only to an example of a widget.
The adverb "about" modifying a value or result means that the shape, structure, measured value, measurement, calculation, etc. may deviate from the geometry, distance, measured value, measurement, calculation, etc. described exactly due to imperfections in materials, machining, manufacturing, sensor measurements, calculations, processing time, communication time, etc.
In the drawings, like numbering represents like elements. In addition, some or all of these elements may be changed. With respect to the media, processes, systems, methods, etc., described herein, it should be understood that, although the steps or blocks of such processes, etc., have been described as occurring according to a particular, ordered sequence, such processes may be practiced by performing the described steps in an order other than the order described herein. It should also be understood that certain steps may be performed simultaneously, that other steps may be added, or that certain steps described herein may be omitted. In other words, the description of processes herein is provided for the purpose of illustrating certain embodiments and should in no way be construed as limiting the claimed invention.
According to the present invention, there is provided a computer having: a processor; and a memory including instructions executable by the processor to: determining a center of an object in a first image acquired by a camera; determining a camera noise value for a region in the first image that includes the center; and determining a forgery score by comparing the camera noise value to a previously determined camera noise value.
According to one embodiment, the instructions comprise further instructions for: determining a facial recognition score by determining a match value to a previously determined facial recognition feature; combining the forgery score with the facial recognition score to determine a forgery confidence score; and determining a user authentication status by comparing the forgery confidence score to a threshold.
According to one embodiment, the instructions comprise further instructions for: granting access to at least one of a vehicle, a vehicle control, an area including a building or room, or a device including a computer or cell phone when the user authentication status is authenticated.
According to one embodiment, the instructions comprise further instructions for: operating the vehicle based on the counterfeit score.
According to one embodiment, the instructions comprise further instructions for: determining the previously determined camera noise value by processing a previously acquired image of the object.
According to one embodiment, the instructions comprise further instructions for: determining the counterfeit score by correlating the camera noise value for the region in the first image with a previously determined camera noise value.
According to one embodiment, the instructions comprise further instructions for: determining the previously determined camera noise value by processing a second image acquired by the camera that includes the object.
According to one embodiment, the instructions comprise further instructions for: determining the camera noise value to include an equationDetermined valueIn the photoresponse nonuniformity, where I i Is the i-th image acquired by the camera and W i Is the noise residual of the ith image.
According to one embodiment, the instructions comprise further instructions for: determining the camera noise value as dark current noise.
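The description does not state how dark current noise is obtained; one common assumption, used only for this sketch, is to characterize it from dark frames (frames captured with no light reaching the sensor).

```python
import numpy as np

def dark_current_noise(dark_frames):
    """Characterize dark current noise from frames captured in darkness;
    the mean and variance of the dark signal serve as a per-camera
    noise value."""
    stack = np.stack([np.asarray(frame, dtype=np.float64) for frame in dark_frames])
    return {"mean": float(stack.mean()), "variance": float(stack.var())}
```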
According to one embodiment, the camera noise values comprise one or more of a mean, a variance, a skewness, and a kurtosis.
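A possible way to compute these four statistics for a noise residual, using NumPy and SciPy; the stored values for the genuine camera can then be compared against the same statistics computed from a new image.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def noise_statistics(noise_residual):
    """Mean, variance, skewness, and kurtosis of a noise residual."""
    flat = np.asarray(noise_residual, dtype=np.float64).ravel()
    return {
        "mean": float(flat.mean()),
        "variance": float(flat.var()),
        "skewness": float(skew(flat)),
        "kurtosis": float(kurtosis(flat)),
    }
```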
According to one embodiment, the camera noise values comprise one or more of camera noise values of a red channel, a green channel, or a blue channel of the first image.
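A small sketch of per-channel noise values, assuming the residual is an H x W x 3 array in red-green-blue channel order (cameras or libraries that deliver BGR data would need the indices swapped).

```python
import numpy as np

def per_channel_noise_values(noise_rgb):
    """Noise values computed separately for the red, green, and blue
    channels of an H x W x 3 noise residual."""
    values = {}
    for index, name in enumerate(("red", "green", "blue")):
        channel = np.asarray(noise_rgb[..., index], dtype=np.float64)
        values[name] = {"mean": float(channel.mean()),
                        "variance": float(channel.var())}
    return values
```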
According to one embodiment, the instructions comprise further instructions for: determining the center of the object with image processing software to detect a contour of a human face.
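The description only states that image processing software detects the contour of a human face; as one assumed realization, an OpenCV Haar cascade can supply the face center.

```python
import cv2

def face_center(image_bgr):
    """Detect the largest face in a BGR image and return the pixel
    coordinates of its center, or None if no face is found."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda face: face[2] * face[3])
    return (int(x + w // 2), int(y + h // 2))
```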
According to one embodiment, the instructions comprise further instructions for: determining the camera noise value in a region including the center of the object.
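A simple sketch of extracting such a region from a NumPy image array; the 128-pixel region size is chosen arbitrarily here and is not specified in the description.

```python
def region_around_center(image, center, size=128):
    """Crop a square region of side `size` pixels around the detected
    object center, clipped to the image bounds."""
    cx, cy = center
    height, width = image.shape[:2]
    half = size // 2
    x0, x1 = max(cx - half, 0), min(cx + half, width)
    y0, y1 = max(cy - half, 0), min(cy + half, height)
    return image[y0:y1, x0:x1]
```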
According to the invention, a method comprises: determining a center of an object in a first image acquired by a camera; determining a camera noise value for a region in the first image that includes the center; and determining a forgery score by comparing the camera noise value to a previously determined camera noise value.
In one aspect of the invention, the method comprises: determining a facial recognition score by determining a match value with a previously determined facial recognition feature; combining the forgery score with the facial recognition score to determine a forgery confidence score; and determining a user authentication status by comparing the forgery confidence score to a threshold.
In one aspect of the invention, the method comprises: granting access to at least one of a vehicle, a vehicle control, an area including a building or room, or a device including a computer or cell phone when the user authentication status is authenticated.
In one aspect of the invention, the method comprises: operating the vehicle based on the forgery score.
In one aspect of the invention, the method comprises: determining the previously determined camera noise value by processing a previously acquired image of the object.
In one aspect of the invention, the method comprises: determining the forgery score by correlating the camera noise value of the region in the first image with a previously determined camera noise value.
In one aspect of the invention, the method comprises: determining the previously determined camera noise value by processing a second image acquired by the camera that includes the object.
Claims (15)
1. A method, comprising:
determining a center of an object in a first image acquired by a camera;
determining a camera noise value for a region in the first image that includes the center; and
determining a forgery score by comparing the camera noise value to a previously determined camera noise value.
2. The method of claim 1, further comprising:
determining a facial recognition score by determining a match value with a previously determined facial recognition feature;
combining the forgery score with the facial recognition score to determine a forgery confidence score; and
determining a user authentication status by comparing the forgery confidence score to a threshold.
3. The method of claim 2, further comprising: granting access to at least one of a vehicle, a vehicle control, an area including a building or room, or a device including a computer or cell phone when the user authentication status is authenticated.
4. The method of claim 1, further comprising: operating the vehicle based on the forgery score.
5. The method of claim 1, further comprising: determining the previously determined camera noise value by processing a previously acquired image of the object.
6. The method of claim 1, further comprising: determining the forgery score by correlating the camera noise value of the region in the first image with a previously determined camera noise value.
7. The method of claim 1, further comprising: determining the previously determined camera noise value by processing a second image acquired by the camera that includes the object.
8. The method of claim 1, further comprising: determining the camera noise value to include photo-response non-uniformity determined by the equation K = Σ(W_i · I_i) / Σ(I_i²), where the sums run over the acquired images, I_i is the i-th image acquired by the camera, and W_i is the noise residual of the i-th image.
10. The method of claim 1, further comprising determining the camera noise value as dark current noise.
11. The method of claim 1, wherein the camera noise values comprise one or more of a mean, a variance, a skewness, and a kurtosis.
12. The method of claim 1, wherein the camera noise values comprise one or more of camera noise values of a red channel, a green channel, or a blue channel of the first image.
13. The method of claim 1, further comprising: determining the center of the object with image processing software to detect a contour of a human face.
14. The method of claim 1, further comprising: determining the camera noise value in a region including the center of the object.
15. A system comprising a computer programmed to perform the method of any of claims 1 to 14.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/326,581 US20220374628A1 (en) | 2021-05-21 | 2021-05-21 | Camera tampering detection |
US17/326,581 | 2021-05-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115379198A true CN115379198A (en) | 2022-11-22 |
Family
ID=83898713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210477881.0A Pending CN115379198A (en) | 2021-05-21 | 2022-04-29 | Camera tampering detection |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220374628A1 (en) |
CN (1) | CN115379198A (en) |
DE (1) | DE102022111221A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3181167A1 (en) | 2020-04-24 | 2021-10-28 | Alarm.Com Incorporated | Enhanced property access with video analytics |
US11983963B2 (en) * | 2020-07-24 | 2024-05-14 | Alarm.Com Incorporated | Anti-spoofing visual authentication |
CN118379668B (en) * | 2024-06-24 | 2024-09-17 | 中国科学技术大学 | Positioning analysis method, system, equipment and storage medium for time falsification |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SG10201705921VA (en) * | 2017-07-19 | 2019-02-27 | Nec Apac Pte Ltd | Method and apparatus for dynamically identifying a user of an account for posting images |
WO2019056310A1 (en) * | 2017-09-22 | 2019-03-28 | Qualcomm Incorporated | Systems and methods for facial liveness detection |
- 2021
  - 2021-05-21 US US17/326,581 patent/US20220374628A1/en active Pending
- 2022
  - 2022-04-29 CN CN202210477881.0A patent/CN115379198A/en active Pending
  - 2022-05-05 DE DE102022111221.7A patent/DE102022111221A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
DE102022111221A1 (en) | 2022-11-24 |
US20220374628A1 (en) | 2022-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220374628A1 (en) | Camera tampering detection | |
US20220019810A1 (en) | Object Monitoring System and Methods | |
JP2022538557A (en) | Systems, methods, and computer programs for enabling operations based on user authorization | |
US11741747B2 (en) | Material spectroscopy | |
US20240087360A1 (en) | Material spectroscopy | |
JP6814334B2 (en) | CNN learning methods, testing methods, learning devices, and testing devices for blind spot monitoring | |
CN107909040B (en) | Car renting verification method and device | |
CN115393920A (en) | Counterfeit image detection | |
CN112115761B (en) | Countermeasure sample generation method for detecting vulnerability of visual perception system of automatic driving automobile | |
US20150077550A1 (en) | Sensor and data fusion | |
CN114821696A (en) | Material spectrometry | |
CN114821694A (en) | Material spectrometry | |
CN115376080B (en) | Camera identification | |
US20220374641A1 (en) | Camera tampering detection | |
CN103914684A (en) | Method and system for recorgnizing hand gesture using selective illumination | |
US11967184B2 (en) | Counterfeit image detection | |
Zadobrischi et al. | Benefits of a portable warning system adaptable to vehicles dedicated for seat belts detection | |
US20230260328A1 (en) | Biometric task network | |
US20240338970A1 (en) | Systems and methods for monitoring environments of vehicles | |
Kalikova et al. | Biometrie identification of persons in a vehicle | |
CN114581897A (en) | License plate recognition method, device and system for multiple types and multiple license plates | |
CN118196698A (en) | Big data driven intelligent traffic monitoring system | |
CN116612509A (en) | Biological feature task network | |
WO2023242871A1 (en) | Device, system and method for 3d face recognition and access management door lock | |
CN117521045A (en) | Spoofing images for user authentication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||