WO2023138629A1 - Device and method for acquiring encrypted image information - Google Patents

Device and method for acquiring encrypted image information

Info

Publication number
WO2023138629A1
Authority
WO
WIPO (PCT)
Prior art keywords
image information
encrypted
mask
optical
convolution
Prior art date
Application number
PCT/CN2023/072973
Other languages
English (en)
Chinese (zh)
Inventor
陈宏伟
黄铮
史宛鑫
Original Assignee
清华大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学
Publication of WO2023138629A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present disclosure relates to the field of optical instruments, and more particularly, to a device and method for acquiring encrypted image information.
  • more and more biometric information can be extracted from images. For example, a user's face can easily be captured from a photo, and it may even be possible to extract the user's fingerprint, palm print, iris, and other private information usable for identification, which poses a threat to the user's privacy.
  • image encryption mainly operates at two levels, software and hardware. At the software level, sensitive regions are usually pixelated, blurred, or replaced after a high-fidelity visual image has already been obtained.
  • in addition, the limitation of focusing distance makes it impossible for a camera with a complex lens group to miniaturize the optical system structure, so the applicable scenarios are limited.
  • the present disclosure uses a mask plate to replace the lens group in the camera lens and optically encrypts the collected image, thereby completing the optical encryption of the image with an optical device of simple structure.
  • Embodiments of the present disclosure provide an apparatus and method for acquiring encrypted image information.
  • An embodiment of the present disclosure provides an encrypted image information acquisition device, comprising at least one mask plate and an optical sensing component. The at least one mask plate is used to receive object light of a target object and generate an encrypted optical image signal based on the received object light, wherein the object light carries image information of the target object, the encrypted optical image signal carries encrypted image information, and each mask plate includes a mask pattern. The optical sensing component is used to receive the encrypted optical image signal, convert the encrypted optical image signal into an electrical signal, and output the electrical signal, wherein the electrical signal includes the encrypted image information of the target object.
  • receiving object light of a target object and generating an encrypted optical image signal based on the received object light includes: performing convolution processing on the image information based on the received object light through the mask pattern, wherein the convolution network for performing convolution processing includes at least one convolution layer.
  • the at least one mask layer is in one-to-one correspondence with the at least one convolutional layer, wherein the mask pattern on each mask layer carries parameter information for convolution processing of the corresponding convolutional layer, wherein the parameter information includes at least convolution kernel parameters.
  • the mask pattern of each mask plate is composed of a plurality of mask holes, and the light transmission degree of each mask hole is the same or different, wherein the light transmission degree of each mask hole is determined by the parameters of its corresponding convolution kernel.
  • the mask pattern is determined in the following manner: establish a convolution model of the mask pattern, and encrypt a training image based on the convolution model of the mask pattern to obtain an encrypted training image; perform feature extraction on the encrypted training image using a back-end neural network to obtain a feature extraction image; determine a loss function for signal processing of the back-end neural network based on the visual task to be completed; and train the convolution model of the mask pattern based on the loss function to obtain the parameter information of the trained convolution model.
  • the target object is an object to be photographed.
  • the target object is an optical image signal carrying image information of the target object
  • the encrypted image information acquisition device further includes a light generating component for generating an optical image signal carrying image information, wherein the optical image signal carrying image information is an incoherent optical signal; the light generating component is a plurality of point light sources, and the optical image signal carrying image information is an optical signal generated by the plurality of point light sources; or the light generating component is a display, and the optical image signal carrying image information is a multi-pixel image generated by the display.
  • the at least one mask plate is a single mask plate
  • the convolutional network includes a single convolutional layer, wherein the distance d_L between two adjacent point light sources or two adjacent pixels, the distance d_LM between the light generating component and the mask plate, the distance d_MS between the mask plate and the optical sensing component, and the size δ of a single pixel on the optical sensing component satisfy a relationship determined by similar triangles (discussed in the detailed description below), wherein a single pixel on the optical sensing component is equivalent to a single pixel generated by the convolution calculation of the convolutional layer.
  • the mask pattern is obtained by coating or etching on each mask plate.
  • the encrypted image information acquisition device further includes: an information extraction component, configured to receive the electrical signal output by the optical sensing component, and perform feature extraction on the encrypted image information based on the electrical signal.
  • the information extraction component includes a deconvolution network and a feature extraction network, wherein the deconvolution network is used to perform deconvolution on the encrypted image information to obtain restored image information, and the feature extraction network is used to extract image features from the restored image information.
  • An embodiment of the present disclosure provides a method for obtaining encrypted image information, comprising: receiving object light of a target object through at least one mask plate, and generating an encrypted optical image signal based on the received object light, wherein the object light carries image information of the target object, the encrypted optical image signal carries encrypted image information, and each mask plate includes a mask pattern; and converting the encrypted optical image signal into an electrical signal, and outputting the electrical signal, wherein the electrical signal includes the encrypted image information of the target object.
  • the mask pattern is determined in the following manner: establish a convolution model of the mask pattern, and encrypt a training image based on the convolution model of the mask pattern to obtain an encrypted training image; perform feature extraction on the encrypted training image using a back-end neural network to obtain a feature extraction image; determine the mask pattern and a loss function for signal processing of the back-end neural network based on the visual task to be completed; and train the convolution model of the mask pattern based on the loss function to obtain the parameter information of the trained convolution model.
  • the mask plate is used to replace the lens group in the camera lens, and the collected image is optically encrypted, which protects user privacy. This omits the demosaicing, noise reduction, and other modules designed for perception enhancement in the image signal processing pipeline of traditional cameras, simplifies the hardware structure, and realizes miniaturization of the optical system structure; compared with digital encryption by other methods, it reduces the amount of calculation, improves the encryption speed, and makes real-time recognition possible.
  • FIG. 1 shows a schematic diagram of the structure of an encrypted image information acquisition device 100 according to an embodiment of the present disclosure
  • FIG. 2 shows another schematic diagram of the structure of an encrypted image information acquisition device 100 according to an embodiment of the present disclosure
  • FIG. 3 shows another schematic diagram of the structure of the encrypted image information acquisition device 100 according to an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of the distance relationship among the target object, the mask plate 1001, and the optical sensing component 1002 according to an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of a pattern of a mask plate 1001 according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic flowchart of a method 600 for determining a mask pattern according to an embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of a U-Net network structure according to an embodiment of the present disclosure
  • FIG. 8 shows a schematic flowchart of a method 800 for obtaining encrypted image information according to an embodiment of the present disclosure.
  • The terms "connection" or "connect" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
  • Optical encryption is a typical and efficient image encryption method. Its essence is to scramble and encode the internal information of an image through optical transformation processes such as interference, diffraction, and imaging, thereby achieving a good encryption effect.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technique of computer science that attempts to understand the nature of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive subject that involves a wide range of fields, including both hardware-level technology and software-level technology.
  • Artificial intelligence basic technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes several major directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Computer vision is a science that studies how to make machines "see". More specifically, it refers to using cameras and computers instead of human eyes to identify, track, and measure targets, and to further process the resulting images so that they become more suitable for human observation or for transmission to instruments for detection.
  • Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous positioning and map construction technologies, as well as common face recognition, fingerprint recognition and other biometric recognition technologies.
  • computer processing can be used to obtain and provide more information in images (such as medical images) to users.
  • Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its application pervades all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching and learning.
  • Fig. 1 shows a schematic diagram of the structure of an encrypted image information acquiring device 100 according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an encrypted image information acquisition device 100, including: at least one mask plate 1001 and an optical sensing component 1002, wherein the at least one mask plate 1001 is used to receive object light 1004 of a target object 1003 and generate an encrypted optical image signal 1005 based on the received object light 1004, wherein the object light 1004 carries image information of the target object and the encrypted optical image signal 1005 carries encrypted image information.
  • Each mask 1001 includes a mask pattern 1006; the optical sensing component 1002 is used to receive the encrypted optical image signal 1005, convert the encrypted optical image signal 1005 into an electrical signal, and output the electrical signal, wherein the electrical signal includes the encrypted image information of the target object.
  • the target object 1003 is an object to be photographed.
  • the object to be photographed may be a real person, object, animal, etc., and the object light carrying the image information of the object to be photographed enters the mask plate 1001 as reflected light.
  • Any target object 1003 that can emit light and/or reflect light to the mask 1001 can use the encrypted image information acquisition device 100 described in this disclosure to perform optical encryption operations.
  • the light described in the present disclosure may refer to various forms of visible light or invisible light such as diffuse reflection light, laser light, monochromatic light, and polychromatic light.
  • the mask plate 1001 generally includes a transparent substrate and a light-shielding film, wherein common materials of the transparent substrate include: transparent glass (such as quartz glass, soda glass, low-expansion glass, etc.), transparent resin, and the like.
  • the light-shielding film is usually a hard light-shielding film (for example, chromium, iron oxide, or molybdenum silicide) or latex, etc.
  • the light-shielding or light-transmitting patterns on the mask plate 1001 are currently realized by coating or etching.
  • the number of the mask 1001 can be one layer or multiple layers.
  • the multi-layer mask is placed in parallel between the target object and the photosensitive component.
  • the mask pattern on each mask plate is used to encrypt the received optical signal and generate an encrypted optical image signal; therefore, the output of one mask plate can serve as the incoming light signal of the next mask plate.
  • the optical sensing component 1002 may be a photosensitive chip, a photoelectric conversion device, and the like.
  • common optical sensing components include CCD (Charge-Coupled Device) type optical sensing components, CMOS (Complementary Metal-Oxide-Semiconductor) type optical sensing components, and the like.
  • Each photosensitive element in CMOS directly integrates an amplifier and analog-to-digital conversion logic.
  • When the photosensitive diode receives light and generates an analog electrical signal, the electrical signal is first amplified by the amplifier in the photosensitive element and then directly converted into a corresponding digital signal. Whether a CCD-type or a CMOS-type optical sensing component is used, the main purpose is to convert the collected optical signal into an electrical signal that can be processed by a subsequent circuit or computer. Any device that can convert an optical signal into a signal usable by a processing device can belong to the optical sensing component described in this disclosure.
  • For example, when a person standing on the ground reflects sunlight, the reflected sunlight carries face image information. The reflected sunlight enters the CMOS sensor through the mask plate; along the propagation direction of the reflected light, the mask plate is placed coaxially with the CMOS sensor and as close to it as possible.
  • The face image information in the reflected sunlight is encrypted by the mask plate, so that the light reaching the sensor carries encrypted face image information.
  • the CMOS sensor receives the encrypted sunlight and generates an encrypted face image. Human eyes cannot directly recognize the content in the encrypted face image.
  • Fig. 2 shows another schematic diagram of the structure of the device 100 for acquiring encrypted image information according to an embodiment of the present disclosure.
  • the encrypted image information acquiring device 100 further includes a light generating component 1006 for generating an optical image signal 1007 carrying image information, wherein the optical image signal 1007 carrying image information is an incoherent optical signal; the light generating component 1006 is a plurality of point light sources, and the optical image signal 1007 carrying image information is an optical signal generated by the plurality of point light sources; or the light generating component 1006 is a display, and the optical image signal 1007 carrying image information is a multi-pixel image generated by the display.
  • the mask plate 1001 receives the optical image signal 1007 carrying image information, and encrypts the image information using the same method as described above for the target object 1003.
  • Fig. 3 shows another schematic diagram of the structure of the encrypted image information acquiring device 100 according to an embodiment of the present disclosure.
  • the encrypted image information acquisition device 100 may further include an information extraction component 1008, configured to receive the electrical signal output by the optical sensing component 1002, and perform feature extraction on the encrypted image information based on the electrical signal.
  • the information extraction component 1008 described in this disclosure may be any component capable of calculating or processing signals, such as a computer, a processor, a server, or an integrated circuit, or a combination of one or more of the above components.
  • the information extraction component includes a deconvolution network and a feature extraction network, wherein the deconvolution network is used to deconvolute the encrypted image information to obtain restored image information, and the feature extraction network is used to extract image features from the restored image information.
  • the point spread function describes the response of the encrypted image information acquiring device 100 to a point source or a point object.
  • the extracted image features can be further applied to tasks such as feature classification or recognition.
  • FIG. 4 shows a schematic diagram of the distance relationship among the target object 1003 , the mask plate 1001 , and the optical sensing component 1002 according to an embodiment of the present disclosure.
  • In some embodiments, the at least one mask plate 1001 is a single mask plate
  • and the convolutional network includes a single convolutional layer. Since light travels in straight lines, the distance d_L between two adjacent point light sources or two adjacent pixels, the distance d_LM between the light generating component 1006 and the mask plate 1001, the distance d_MS between the mask plate 1001 and the optical sensing component 1002, and the size δ of a single pixel on the optical sensing component 1002 are related by the law of similar triangles, where a single pixel of size δ on the optical sensing component 1002 is equivalent to a single pixel generated by the convolution calculation of the convolutional layer.
  • The distance d_L between two adjacent point light sources or two adjacent pixels represents the resolution of the image information that can be acquired by the encrypted image information acquisition device 100. According to this relationship, the smaller the distance d_MS between the mask plate 1001 and the optical sensing component 1002 and the larger the distance d_LM between the light generating component 1006 and the mask plate 1001, the higher the resolution of the encrypted image information acquiring device 100. The device 100 can therefore be placed farther away from the light generating component 1006, and the mask plate 1001 should be placed as close as possible to the optical sensing component 1002, which improves the resolution of the encrypted image information acquiring device 100 and at the same time reduces its volume.
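  • As a sketch of this relationship, assuming the usual lensless shadow-projection geometry in which each point source casts a laterally shifted copy of the mask pattern onto the sensor, the similar-triangle relation between these quantities can be written as:

```latex
% Reconstructed relation (assumption): a lateral displacement d_L of a point
% source shifts the mask shadow on the sensor by one sensor pixel \delta.
\frac{d_L}{d_{LM}} = \frac{\delta}{d_{MS}}
\quad\Longleftrightarrow\quad
\delta = \frac{d_L \, d_{MS}}{d_{LM}}
```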
  • the distance between them can be obtained by measurement.
  • the distance measurement can be realized by using various high-precision distance measurement methods such as high-precision ruler, electrical distance measurement, and optical distance measurement.
  • Since the encrypted image information acquisition device 100 may have installation errors, and the calculation and measurement of δ and d_MS may also introduce errors, the encrypted image information acquisition device 100 can be fine-tuned according to the encrypted results in order to ensure its accuracy.
  • the process of fine-tuning the device is to verify whether the processing result of the device has an error from the theoretical value when the device is first installed, and if there is an error, the error is reduced through fine-tuning. For devices that have been debugged and verified, the fine-tuning process here is not necessary.
  • After the encrypted image information acquisition device 100 is placed at a fixed position, it receives the object light 1004 of the target object 1003 and generates an encrypted optical image signal 1005. By comparing the resolution calculated for the encrypted image information acquisition device 100 with the resolution of the encrypted image information carried in the encrypted optical image signal 1005, the information loss during the image encryption process can be measured.
  • the resolution of the encrypted image information acquisition device can be adjusted.
  • FIG. 5 shows a schematic diagram of a pattern of a mask plate 1001 according to an embodiment of the present disclosure.
  • Convolution processing is performed on the image information carried by the received object light 1004 through the mask pattern. The at least one mask plate 1001 corresponds to at least one convolutional layer, and the mask pattern on each mask plate 1001 carries the parameters of the convolution kernel used for the convolution processing of the corresponding convolutional layer.
  • the mask pattern of each mask plate 1001 is composed of a plurality of mask holes 1009, and the light transmission degree of each mask hole 1009 is the same or different, wherein the light transmission degree of each mask hole 1009 is determined by the parameters of its corresponding convolution kernel.
  • the convolution kernel can be a 0/1 binary distribution, 0 means opaque, and 1 means fully transparent.
  • the convolution kernel can also be an analog quantity, for example, 0 represents no light transmission, 1 represents full light transmission, and 0.5 represents half light transmission.
  • For example, the mask holes 1009 on the mask plate 1001 follow a binary 0/1 distribution, where 0 represents no light transmission and 1 represents full light transmission; the convolution kernel corresponding to the mask holes 1009 on the mask plate 1001 is then the binary matrix read directly from the mask pattern.
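  • Purely as a hypothetical illustration of such a binary kernel (not the actual pattern of FIG. 5), a small 3×3 mask pattern could correspond to:

```latex
% Hypothetical 3x3 binary convolution kernel (illustration only):
% 1 = fully transmissive mask hole, 0 = opaque region
K = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix},
\qquad K_{ij} \in \{0, 1\}
```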
  • FIG. 6 shows a schematic flowchart of a method 600 for determining a mask pattern according to an embodiment of the disclosure.
  • a convolution model of the mask pattern is established, and based on the convolution model of the mask pattern, a training image used for training is encrypted to obtain an encrypted training image.
  • the training image can be an image carrying information such as people, objects, animals, text, etc.
  • the mask pattern carries parameters of the convolution kernel
  • a convolution model of the mask pattern is established, and the training image is convolved through the convolution model to obtain an encrypted training image.
  • In step S602, feature extraction is performed on the encrypted training image by using a back-end neural network to obtain a feature extraction image.
  • the back-end neural network includes a deconvolution network and a feature extraction network, wherein the deconvolution network is used to deconvolute the encrypted image information to obtain restored image information, and the feature extraction network is used to extract image features from the restored image information.
  • the deconvolution network can be inverse filtering or Wiener filtering.
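  • As an illustrative sketch (not the disclosed implementation), a Wiener-filter deconvolution of the encrypted measurement, given a known mask point spread function, could look as follows; the regularization constant k is an assumed noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(encrypted, psf, k=1e-2):
    """Estimate the original image from an optically encrypted measurement
    using a Wiener filter, given the mask point spread function (PSF).
    k approximates the noise-to-signal power ratio (regularization)."""
    H = np.fft.fft2(psf, s=encrypted.shape)          # PSF transfer function
    G = np.fft.fft2(encrypted)                       # measurement spectrum
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)    # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(F_hat))
```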
  • the feature extraction network can be a network structure such as U-Net, LeNet-5, AlexNet, VGGNet, GoogLeNet, ResNet, or DenseNet, and a suitable feature extraction network can be determined for different feature extraction requirements.
  • Fig. 7 shows a schematic diagram of a U-Net network structure according to an embodiment of the present disclosure.
  • the U-Net network is a type of convolutional neural network, which was originally applied in the field of biological images, such as cell segmentation.
  • the overall structure of the U-Net network includes the encoding stage on the left and the decoding stage on the right.
  • the encoding stage on the left is also called the contraction path (shrinkage path).
  • The contraction path has a basic convolutional neural network structure, including convolutional layers, downsampling layers, and activation function layers. Since the feature map at each stage is meant to shrink, there is no need to add extra padding when performing operations such as convolution.
  • the decoding stage on the right side of the network structure is also called the expansion path.
  • the network structure of the expansion path usually has a mirror image relationship with the contraction path.
  • the U-Net network inputs an encrypted training image with a size of 572×572.
  • the convolution kernel uses two repeated 3 ⁇ 3 convolution kernels.
  • the downsampling layer uses a 2 ⁇ 2 pooling layer with a step size of 2.
  • the rectified linear unit (ReLU) is selected as the activation function.
  • a 1 ⁇ 1 convolution kernel is used for classification to obtain each final classification prediction result.
  • U-net can process input images of any size, and can use very few images for training.
  • the specific network structure can be adjusted according to the needs of actual tasks.
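  • As a minimal sketch of one contraction-path stage matching the description above (two unpadded 3×3 convolutions with ReLU followed by 2×2 max pooling with stride 2), assuming a PyTorch implementation with a single-channel 572×572 input:

```python
import torch
import torch.nn as nn

class UNetEncoderBlock(nn.Module):
    """One contraction-path stage: two unpadded 3x3 convolutions, each
    followed by ReLU, then 2x2 max pooling with stride 2."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skip = self.convs(x)           # feature map kept for the skip connection
        return self.pool(skip), skip

x = torch.rand(1, 1, 572, 572)         # a 572x572 encrypted training image
down, skip = UNetEncoderBlock(1, 64)(x)
print(skip.shape, down.shape)          # (1, 64, 568, 568) and (1, 64, 284, 284)
```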
  • a network structure for classification can be connected, such as the Inception-v2 or Inception-v3 network structure. It should be understood that these are just some examples of network structures for classification, and any network structure for implementing feature classification can be applied to the present disclosure.
  • In step S603, based on the visual task to be completed, the mask pattern and the loss function for signal processing of the back-end neural network are determined.
  • different loss functions are designed for different task objectives.
  • For a face recognition task, for example, the loss function can be a weighted sum of the cosine distance between the encrypted face image and the restored face image and the distance between triplets of the extracted face feature images, and is used to evaluate the degree of deviation between the predicted value and the true value.
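  • A minimal sketch of such a combined loss, assuming PyTorch and hypothetical weights w_cos and w_tri (the actual weighting is task-dependent and not specified here):

```python
import torch
import torch.nn.functional as F

def combined_loss(encrypted, restored, anchor_f, positive_f, negative_f,
                  w_cos=1.0, w_tri=1.0, margin=0.2):
    """Weighted sum of (a) the cosine distance between the encrypted and the
    restored face images and (b) a triplet loss on extracted face features."""
    cos_dist = 1.0 - F.cosine_similarity(
        encrypted.flatten(1), restored.flatten(1), dim=1).mean()
    tri = F.triplet_margin_loss(anchor_f, positive_f, negative_f, margin=margin)
    return w_cos * cos_dist + w_tri * tri
```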
  • In step S604, based on the loss function, the convolution model of the mask pattern is trained to obtain the parameter information of the trained convolution model.
  • a mask pattern is determined according to the convolution kernel parameters of the trained convolution model, and the mask pattern is obtained by coating or etching on the mask plate 1001 .
  • In this way, the optical encryption is combined with the back-end neural network, so that the whole link is jointly optimized through an end-to-end network, the mask is optimized, and the restoration and recognition accuracy of encrypted images is improved.
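  • The following is a minimal sketch of such end-to-end joint optimization, assuming PyTorch and a toy back-end classifier; the kernel size, back-end network, and training data are placeholders, and the mask is modeled as a single convolutional layer whose kernel values are squashed into [0, 1] to represent light transmittance:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskConv(nn.Module):
    """Differentiable stand-in for the optical mask: one convolutional layer
    whose kernel values are kept in [0, 1], i.e. the light transmittance of
    each mask hole."""
    def __init__(self, kernel_size=15):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(1, 1, kernel_size, kernel_size))

    def transmittance(self):
        return torch.sigmoid(self.raw)            # values in [0, 1]

    def forward(self, x):
        k = self.transmittance()
        return F.conv2d(x, k, padding=k.shape[-1] // 2)

# Hypothetical back-end standing in for restoration and feature extraction
backend = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 10),
)

mask_model = MaskConv()
optimizer = torch.optim.Adam(
    list(mask_model.parameters()) + list(backend.parameters()), lr=1e-3)

images = torch.rand(4, 1, 64, 64)                 # placeholder training images
labels = torch.randint(0, 10, (4,))               # placeholder labels

for step in range(100):
    encrypted = mask_model(images)                # simulated optical encryption
    logits = backend(encrypted)                   # back-end processing
    loss = F.cross_entropy(logits, labels)        # task-dependent loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained transmittance map would then guide the physical mask pattern
# produced by coating or etching.
print(mask_model.transmittance().shape)
```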
  • Fig. 8 shows a schematic flowchart of a method 800 for acquiring encrypted image information according to an embodiment of the present disclosure.
  • In step S801, the object light of the target object is received through at least one mask plate, and an encrypted optical image signal is generated based on the received object light, wherein the object light carries image information of the target object, the encrypted optical image signal carries encrypted image information, and each mask plate includes a mask pattern.
  • The image information carried by the received object light 1004 is convolved through the mask pattern; the at least one mask plate 1001 corresponds to at least one convolutional layer, and the mask pattern on each mask plate 1001 carries the parameters of the convolution kernel of the corresponding convolutional layer used for convolution processing.
  • In step S802, the encrypted optical image signal is converted into an electrical signal, and the electrical signal is output, wherein the electrical signal includes the encrypted image information of the target object.
  • The encryption process is realized through convolution in the optical domain, so the result collected by the optical sensing component is no longer the actual optical image but the convolution of the actual image information; in this way, the actual image information is protected and encrypted.
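  • A minimal sketch of this optical-domain convolution model, assuming a hypothetical binary mask pattern and using an ordinary 2D convolution as a stand-in for the optical process:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

scene = rng.random((64, 64))                     # placeholder scene image in [0, 1]
# Hypothetical binary mask pattern acting as the convolution kernel:
# 1 = fully transmissive mask hole, 0 = opaque region
mask_psf = (rng.random((15, 15)) > 0.5).astype(float)

# The signal collected by the optical sensing component is (approximately)
# the scene convolved with the mask pattern -- the encrypted image.
encrypted = convolve2d(scene, mask_psf, mode="same", boundary="fill")
print(encrypted.shape)                           # (64, 64), not visually recognizable
```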
  • the information extraction component 1008 can be used to receive the electrical signal output by the optical sensing component 1002, and continue to process the image information based on the electrical signal.
  • the mask plate is used to replace the lens group in the camera lens, and the collected image is optically encrypted, which protects user privacy. This omits the demosaicing, noise reduction, and other modules designed for perception enhancement in the image signal processing pipeline of traditional cameras, simplifies the hardware structure, and realizes miniaturization of the optical system structure.
  • each block in the flowchart or block diagram may represent a module, program segment, or a portion of code that includes at least one executable instruction for implementing a specified logical function.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • the various example embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software, which may be executed by a controller, microprocessor or other computing device.
  • aspects of embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or using some other graphical representation, it will be understood that the blocks, apparatus, systems, techniques or methods described herein may be implemented, by way of non-limiting example, in hardware, software, firmware, special purpose circuits or logic, general purpose hardware or a controller or other computing device, or some combination thereof.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioethics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention relate to a device and method for obtaining encrypted image information. The encrypted image information obtaining device according to the present invention comprises at least one mask layer and an optical sensing component; the at least one mask layer is used to receive object light of a target object and generate an encrypted optical image signal based on the received object light, the object light carrying image information of the target object, the encrypted optical image signal carrying encrypted image information, and each mask layer comprising a mask pattern; the optical sensing component is used to receive the encrypted optical image signal, convert the encrypted optical image signal into an electrical signal, and output the electrical signal, the electrical signal comprising the encrypted image information of the target object. Thus, the image is optically encrypted at the image acquisition stage, and structural miniaturization of the optical system is achieved for optical encryption.
PCT/CN2023/072973 2022-01-21 2023-01-18 Device and method for obtaining encrypted image information WO2023138629A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210072981.5A CN114491592A (zh) 2022-01-21 2022-01-21 Encrypted image information acquisition device and method
CN202210072981.5 2022-01-21

Publications (1)

Publication Number Publication Date
WO2023138629A1 true WO2023138629A1 (fr) 2023-07-27

Family

ID=81472818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/072973 WO2023138629A1 (fr) 2022-01-21 2023-01-18 Device and method for obtaining encrypted image information

Country Status (2)

Country Link
CN (1) CN114491592A (fr)
WO (1) WO2023138629A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114491592A (zh) * 2022-01-21 2022-05-13 清华大学 Encrypted image information acquisition device and method
TWI810057B (zh) * 2022-09-06 2023-07-21 國立陽明交通大學 Encryption-type optical system
CN115457299B (zh) * 2022-11-14 2023-03-31 中国科学院光电技术研究所 Sensing chip and projection lithography machine matching method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102087503A (zh) * 2011-01-11 2011-06-08 浙江师范大学 Double random phase optical color image encryption device and method
WO2021075527A1 (fr) * 2019-10-18 2021-04-22 国立大学法人大阪大学 Camera and imaging system
CN113484281A (zh) * 2021-05-28 2021-10-08 太原理工大学 Optical encryption device and method based on the unique light scattering characteristics of biological tissue
CN113298060A (zh) * 2021-07-27 2021-08-24 支付宝(杭州)信息技术有限公司 Privacy-preserving biometric recognition method and device
CN114491592A (zh) * 2022-01-21 2022-05-13 清华大学 Encrypted image information acquisition device and method

Also Published As

Publication number Publication date
CN114491592A (zh) 2022-05-13

Similar Documents

Publication Publication Date Title
WO2023138629A1 (fr) Dispositif et procédé d'obtention d'informations d'image chiffrées
WO2021018163A1 (fr) Procédé et appareil de recherche de réseau neuronal
WO2021164234A1 (fr) Procédé de traitement d'image et dispositif de traitement d'image
EP3816929B1 (fr) Procédé et appareil de restauration d'une image
CN112767466B (zh) 一种基于多模态信息的光场深度估计方法
CN112102182B (zh) 一种基于深度学习的单图像去反射方法
Guo et al. Deep spatial-angular regularization for light field imaging, denoising, and super-resolution
An et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers
WO2019054092A1 (fr) Dispositif et procédé de génération d'image
WO2019071433A1 (fr) Procédé, système et appareil de reconnaissance de motifs
Yu et al. Multiple attentional path aggregation network for marine object detection
WO2021045599A1 (fr) Procédé d'application d'effet bokeh sur une image vidéo et support d'enregistrement
CN115131281A (zh) 变化检测模型训练和图像变化检测方法、装置及设备
Zhang et al. From compressive sampling to compressive tasking: retrieving semantics in compressed domain with low bandwidth
Chang et al. Deep learning based image Super-resolution for nonlinear lens distortions
Hsu et al. Object detection using structure-preserving wavelet pyramid reflection removal network
CN113628134B (zh) 图像降噪方法及装置、电子设备及存储介质
Hong et al. Reflection removal with NIR and RGB image feature fusion
Cheng et al. A mutually boosting dual sensor computational camera for high quality dark videography
Zhang et al. Hand gestures recognition in videos taken with a lensless camera
Wang et al. PMSNet: Parallel multi-scale network for accurate low-light light-field image enhancement
CN116977804A (zh) 图像融合方法、电子设备、存储介质及计算机程序产品
Farhood et al. 3D point cloud reconstruction from a single 4D light field image
CN115330655A (zh) 一种基于自注意力机制的图像融合方法及系统
Wang et al. RGNAM: recurrent grid network with an attention mechanism for single-image dehazing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23742944

Country of ref document: EP

Kind code of ref document: A1