CN117313869B - Large model privacy protection reasoning method based on model segmentation - Google Patents
Large model privacy protection reasoning method based on model segmentation
- Publication number
- CN117313869B (application CN202311418709.9A)
- Authority
- CN
- China
- Prior art keywords
- model
- client
- server
- layers
- segmentation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a large model privacy protection reasoning method based on model segmentation, belonging to the technical fields of computer artificial intelligence and large model security. Model segmentation: the Encoder and the Decoder of the original large model are deployed at the client, and the middle part of the large model remains local to the server. Model compression: the middle layers are compressed by the server and sent to the client, forming at the client a small model with the basic functions of the original large model. Model fine-tuning: the client fine-tunes the small model through a loss function. Model reasoning: the client sends the trained Encoder to the server according to the protocol; the server reasons to obtain an intermediate result, and the final result is obtained by combining this result with the Decoder held locally at the client. The invention balances model performance and privacy protection, effectively prevents reconstruction attacks, and has no negative effect on model performance; at the same time it prevents model privacy leakage on the large-model side and data leakage on the user side, achieves high computational efficiency, and requires no large amount of computing resources at the client.
Description
Technical Field
The invention relates to the technical field of computer artificial intelligence and large model security, in particular to a large model privacy protection reasoning method based on model segmentation.
Background
Large model privacy protection reasoning based on model segmentation has very important applications in the fields of artificial intelligence and large model security. Model segmentation refers to splitting a complete neural network model into two or more sub-modules, which are then processed separately to accomplish different tasks. The core idea is to make the model easier to understand, optimize, and debug through modular design. Model segmentation technology originated in the 1990s, and early work mainly adopted a simple serial connection of modules. Entering the 21st century, more complex tree structures and multi-branch connections were proposed. Today, artificial intelligence large models, namely neural network models of extremely large parameter scale, typically with billions to hundreds of billions of parameters, are in a phase of rapid evolution; they obtain general language or visual capabilities by pre-training on massive data. Early large models include language models such as the GPT series and BERT, and visual models such as Vision Transformer. In the last two years parameter counts have grown explosively, producing models at the hundred-billion parameter scale and beyond such as GPT-3 and Switch Transformer, and parameters are expected to continue to grow rapidly as computing power further increases.
Nowadays, with the development of deep learning and large models, model segmentation techniques are widely used in fields such as natural language processing and computer vision. Model segmentation reduces training difficulty through modular design and improves a model's ability to adapt to new tasks, making it an important technical means for realizing transfer learning. With the continued development of transfer learning, the emerging Offsite-Tuning technique adopts a privacy-preserving approach so that a data owner does not need to share its sensitive data with the model owner. Traditional transfer learning methods may require data owners to share their data and pay expensive fees so that the model owner can perform full fine-tuning, whereas Offsite-Tuning reduces the need to share data by sending a lightweight adapter and a simulator to the data owner, thereby reducing cost. For large foundation models, Offsite-Tuning is also more computationally efficient: the data owner only needs to fine-tune the adapter locally, without access to the full model weights, saving a great deal of computation time and resources.
However, transfer learning also has security problems, such as model leakage through model extraction, adversarial example attacks against the source or target model, and possible privacy leakage risks in the sample data of the source and target domains. The invention therefore provides a large model privacy protection reasoning method based on model segmentation to solve these problems.
Disclosure of Invention
The invention aims to provide a large model privacy protection reasoning method based on model segmentation, which balances model performance and privacy protection, effectively prevents reconstruction attacks, and has no negative effect on model performance; at the same time, the model segmentation technique prevents model privacy leakage on the large-model side and data leakage on the user side, achieves high computational efficiency, and requires no large amount of computing resources at the client.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a large model privacy protection reasoning method based on model segmentation comprises the following steps:
s1, model segmentation: deploying the front n layers of encoders and the last n layers of encoders of the original large model at the client, and leaving the middle part of the large model at the local of the server;
s2, model compression: the middle layer is compressed through the server side and then sent to the client side, and a small model with the basic function of the original large model is formed at the client side;
s3, fine adjustment of a model: the client side fine-tunes the model through a loss function;
s4, model reasoning: the client sends the trained front n layers of encodings to the server according to the protocol, then performs reasoning to obtain an intermediate result, and then completes training by combining with the last n layers of encodings of the local server to obtain a final result.
Preferably, in step S1, the client's original data are used to train the 2n model layers without leaving the local environment, and the trained Encoder and Decoder are obtained by the client training the first n layers and the last n layers of the large model.
Preferably, in step S2, the remaining intermediate layers of step S1 are compressed to form a simulator module that provides an approximate gradient direction during adaptation; the simulator module is sent from the server to the client, where it is combined with the deployed Encoder and Decoder of the original large model to form a complete small model.
Preferably, in step S3, the small model obtained in step S2 is fine-tuned through a loss function L that combines two terms: L₁ is the loss function of the original large-model task, and L₂ is the cosine distance between the intermediate features f₂, …, fₙ₋₂ of the small model and the corresponding intermediate features f'₂, …, f'ₙ₋₂ of the original large model.
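For reference, a plausible reconstruction of the combined objective in symbols (the weighted-sum form and the balancing coefficient λ are assumptions made here; the exact expression is given in the patent's own formula):

$$L = L_1 + \lambda L_2, \qquad L_2 = \frac{1}{n-3}\sum_{i=2}^{n-2}\left(1 - \frac{f_i \cdot f'_i}{\lVert f_i \rVert\, \lVert f'_i \rVert}\right)$$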
Preferably, in step S4, after the Encoder is task-fine-tuned at the client, it is combined at the server and jointly trained with the rest of the large model, keeping the model parameters between the server and the client coordinated and consistent; finally, the server sends the trained intermediate result back to the client, where it is combined with the trained Decoder to refine the model and obtain the final output.
Therefore, the large model privacy protection reasoning method based on model segmentation has the following beneficial effects:
1. By deploying part of the large model locally at the client and training that part there, the invention protects the model privacy of the server side and the data privacy of the user side at the same time.
2. The middle part of the large model is compressed into a simulator using a model-compression-based fine-tuning method, assisting the client in locally fine-tuning the Encoder and the Decoder; at the same time, the loss function balances model performance and privacy protection, so that performance during fine-tuning does not degrade and reconstruction attacks are prevented.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical scheme of the invention is further described below through the attached drawings and the embodiments.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by one of ordinary skill in the art to which this invention belongs. The terms "first", "second", and the like do not denote any order, quantity, or importance, but merely distinguish one element from another. The word "comprising" or "comprises" means that the elements or items listed after the word are included in the element or item preceding the word, together with their equivalents, without excluding other elements or items. The terms "disposed", "mounted", "connected", and "coupled" are to be construed broadly: a connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal communication between two elements. "Upper", "lower", "left", "right", and the like indicate only relative positional relationships, which may change when the absolute position of the described object changes.
Examples
As shown in FIG. 1, the invention provides a large model privacy protection reasoning method based on model segmentation, which comprises the following steps:
1. (Step S1) Model segmentation: the first n Encoder layers and the last n Decoder layers of the original large model are deployed at the client, and the middle part of the large model remains local to the server.
The original large model may be a Transformer model, a Seq2Seq model, or the like; the first n layers are denoted w₁, w₂, etc., and the last n layers wₙ₋₁, wₙ, etc.
Training of the first n layers and the last n layers is completed at the client, yielding the trained Encoder and Decoder. This strategy ensures that the client's data remain invisible to the server during training: the client's original data never leave the local environment and are used only to train these key model layers. Sensitive data are thus protected from exposure in network transmission, reducing the risk of data leakage. Moreover, only the first n layers and the last n layers need to be trained at the client; compared with the whole large model, the required computing resources are greatly reduced, and training can be performed easily even on resource-constrained devices without huge computing clusters. This efficient use of computing resources lowers the cost and complexity of the training process.
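As an illustration, a minimal PyTorch-style sketch of the segmentation step (the function name split_model and the representation of the model as a flat list of layers are assumptions for illustration, not part of the patent):

```python
import torch.nn as nn

def split_model(layers: nn.ModuleList, n: int):
    """Split a stack of model layers into a client-side head (first n layers),
    a server-side middle part, and a client-side tail (last n layers)."""
    head = nn.Sequential(*layers[:n])      # Encoder: trained locally at the client
    middle = nn.Sequential(*layers[n:-n])  # remains local to the server
    tail = nn.Sequential(*layers[-n:])     # Decoder: trained locally at the client
    return head, middle, tail
```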
2. (Step S2) Model compression: the middle layers are compressed by the server and sent to the client, forming at the client a small model with the basic functions of the original large model.
1) The server compresses the remaining intermediate layers R of step S1 to form a simulator module E that provides an approximate gradient direction during adaptation; the module contains the main functional information of the original model and is a fixed, untrainable part.
2) The construction of the simulator module takes place at the server, where the middle layers are processed by a carefully designed compression algorithm so that the simulator's size is minimized while the key functional characteristics of the model are retained. The compression is lossy: it aims to preserve the main information of the model while shrinking the simulator module so that it can be transmitted efficiently over the network.
3) The resulting simulator module is sent from the server to the client, where it is combined with the deployed Encoder and Decoder of the original large model to form a complete small model. The small model has sufficient performance and functionality to complete specific tasks; assisted by the simulator module, the user fine-tunes the Encoder and Decoder with the user's own data.
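One lossy compression consistent with this description is uniform layer dropping followed by freezing; the sketch below assumes this strategy (the name build_simulator and the keep_every parameter are illustrative, since the patent does not fix a particular compression algorithm):

```python
import copy
import torch.nn as nn

def build_simulator(middle: nn.Sequential, keep_every: int = 2) -> nn.Sequential:
    """Compress the server-side middle layers R into a simulator module E by
    keeping every keep_every-th layer; the result approximates the middle's
    gradient directions at a fraction of its size."""
    kept = [copy.deepcopy(layer)
            for i, layer in enumerate(middle) if i % keep_every == 0]
    simulator = nn.Sequential(*kept)
    for p in simulator.parameters():
        p.requires_grad_(False)  # fixed, untrainable once deployed at the client
    return simulator
```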
3. (Step S3) Model fine-tuning: the client fine-tunes the small model through the loss function.
The small model obtained in step S2 is fine-tuned using a loss function L that combines two terms: L₁ is the loss function of the original large-model task, and L₂ is the cosine distance between the intermediate features f₂, …, fₙ₋₂ of the small model and the corresponding intermediate features f'₂, …, f'ₙ₋₂ of the original large model.
1) The loss function L₁ of the original large-model task generally depends on the nature of the specific task and may be a cross-entropy loss, a maximum-likelihood loss, or another loss function suited to the task. The role of L₁ is to ensure that the fine-tuned small model maintains high performance on the original task, i.e. that the model's performance does not degrade; this part of the loss function guarantees the model's effectiveness on the task.
2) The cosine distance L₂ measures the similarity of the models in the intermediate feature space. Minimizing L₂ protects against reconstruction attacks (in which an attacker attempts to reconstruct the original data from intermediate features); introducing the L₂ loss strengthens the security of the model and ensures that its intermediate representations do not easily leak sensitive information.
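A sketch of a loss with this structure, assuming a weighted sum of the two terms (the weight lam is a hypothetical balancing coefficient, not specified in the text above):

```python
import torch
import torch.nn.functional as F

def fine_tune_loss(task_loss: torch.Tensor,
                   feats_small: torch.Tensor,  # f_2..f_{n-2}, stacked, shape (B, D)
                   feats_large: torch.Tensor,  # f'_2..f'_{n-2}, same shape
                   lam: float = 0.1) -> torch.Tensor:
    """L = L1 + lam * L2, where L2 is the cosine distance
    (1 - cosine similarity) between intermediate features."""
    l2 = 1.0 - F.cosine_similarity(feats_small, feats_large, dim=-1).mean()
    return task_loss + lam * l2
```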
4. (Step S4) Model reasoning: the client sends the trained first n Encoder layers to the server according to the protocol; the server then performs reasoning to obtain an intermediate result, which is combined with the last n Decoder layers held locally at the client to complete training and obtain the final result.
The client sends the fine-tuned, trained Encoder, which has been task-adapted to the specific application scenario or requirements, to the server; the server combines this Encoder with the rest of the large model and trains them jointly. This process keeps the model parameters consistent between the server and the client, maintaining the model's performance and functionality to the greatest extent.
The server sends the trained intermediate results back to the client; these include the results of the server's further training of the model and any resulting performance improvements. The client combines these intermediate results with its own trained Decoder, further refining the model to produce the final output. Step S4 is the final link of the overall process, ensuring that the model reaches an optimal performance level through the cooperation of the server and the client.
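The resulting inference path can be sketched as follows (ServerStub is a hypothetical placeholder for the agreed protocol; in deployment run_middle would be a remote call to the server):

```python
class ServerStub:
    """Server side of the protocol; stands in for a remote service."""
    def __init__(self, middle):
        self.middle = middle       # the retained middle part of the large model

    def run_middle(self, h):
        return self.middle(h)      # server-side reasoning -> intermediate result

def client_infer(x, head, tail, server: ServerStub):
    h = head(x)                    # client: first n fine-tuned Encoder layers
    z = server.run_middle(h)       # intermediate result computed on the server
    return tail(z)                 # client: last n Decoder layers -> final output
```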
Therefore, the large model privacy protection reasoning method based on model segmentation adopted by the invention balances model performance and privacy protection, effectively prevents reconstruction attacks, and has no negative effect on model performance; at the same time, the model segmentation technique prevents model privacy leakage on the large-model side and user data leakage, achieving high computational efficiency without requiring a large amount of computing resources at the client.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical scheme of the invention. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that the technical scheme of the invention may be modified or equivalently replaced without departing from the spirit and scope of the technical scheme of the invention.
Claims (3)
1. The large model privacy protection reasoning method based on model segmentation is characterized by comprising the following steps:
s1, model segmentation: deploying the front n layers of encoders and the last n layers of encoders of the original large model at the client, and leaving the middle part of the large model at the server;
s2, model compression: the middle layer is compressed through the server side and then sent to the client side, and a small model with the basic function of the original large model is formed at the client side;
s3, fine adjustment of a model: the client side carries out fine adjustment on the small model obtained in the step S2 through a loss function;
s4, model reasoning: the client sends the trained front n layers of encodings to the server according to the protocol, then performs reasoning to obtain an intermediate result, and then completes training by combining with the last n layers of encodings of the client to obtain a final result;
in step S2, the remaining intermediate layers of step S1 are compressed to form a simulator module that provides an approximate gradient direction during adaptation; the simulator module is sent from the server to the client, where it is combined with the deployed Encoder and Decoder of the original large model to form a complete small model.
2. The large model privacy preserving reasoning method based on model segmentation as set forth in claim 1, wherein: in step S1, the client's original data are used to train the 2n model layers without leaving the local environment, and the trained Encoder and Decoder are obtained by the client training the first n layers and the last n layers of the large model.
3. The large model privacy preserving reasoning method based on model segmentation as set forth in claim 2, wherein: in step S4, after the client performs task fine-tuning, the Encoder is combined at the server and trained with the rest of the large model, keeping the model parameters between the server and the client coordinated and consistent; finally, the server sends the trained intermediate result back to the client, where it is combined with the last n Decoder layers for training to obtain the final result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311418709.9A CN117313869B (en) | 2023-10-30 | 2023-10-30 | Large model privacy protection reasoning method based on model segmentation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311418709.9A CN117313869B (en) | 2023-10-30 | 2023-10-30 | Large model privacy protection reasoning method based on model segmentation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117313869A (en) | 2023-12-29
CN117313869B (en) | 2024-04-05
Family
ID=89242714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311418709.9A Active CN117313869B (en) | 2023-10-30 | 2023-10-30 | Large model privacy protection reasoning method based on model segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117313869B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942147A (en) * | 2019-11-28 | 2020-03-31 | 支付宝(杭州)信息技术有限公司 | Neural network model training and predicting method and device based on multi-party safety calculation |
CN111832729A (en) * | 2020-07-06 | 2020-10-27 | 东南数字经济发展研究院 | Distributed deep learning reasoning deployment method for protecting data privacy |
WO2023050754A1 (en) * | 2021-09-30 | 2023-04-06 | 清华大学 | Model training method and apparatus for private data set |
KR20230084407A (en) * | 2021-12-03 | 2023-06-13 | 연세대학교 산학협력단 | An artificial intelligence-based privacy-preserving distribution method for vertically, horizontally and multi-partitioned data and a device thereof |
CN114140478A (en) * | 2022-01-30 | 2022-03-04 | 电子科技大学 | Federal learning method, system, device and medium for medical image segmentation |
CN114723057A (en) * | 2022-03-31 | 2022-07-08 | 北京理工大学 | Neural network collaborative reasoning method for multi-access edge computing system |
CN114912132A (en) * | 2022-05-11 | 2022-08-16 | 南京大学 | Method for realizing privacy protection convolutional neural network reasoning based on model conversion |
CN115775010A (en) * | 2022-11-23 | 2023-03-10 | 国网江苏省电力有限公司信息通信分公司 | Electric power data sharing method based on horizontal federal learning |
CN116167084A (en) * | 2023-02-24 | 2023-05-26 | 北京工业大学 | Federal learning model training privacy protection method and system based on hybrid strategy |
CN116582242A (en) * | 2023-04-14 | 2023-08-11 | 南京大学 | Safe federal learning method of ciphertext and plaintext hybrid learning mode |
CN116739079A (en) * | 2023-05-10 | 2023-09-12 | 浙江大学 | Self-adaptive privacy protection federal learning method |
CN116579418A (en) * | 2023-05-18 | 2023-08-11 | 杭州电子科技大学 | Privacy data protection method for model segmentation optimization under federal edge learning environment |
CN116805082A (en) * | 2023-08-23 | 2023-09-26 | 南京大学 | Splitting learning method for protecting private data of client |
Non-Patent Citations (4)
Title |
---|
JIUYUN XU et al. IFTS: A Location Privacy Protection Method Based on Initial and Final Trajectory Segments. Digital Object Identifier, 2021, vol. 9, 18112-18122. *
Ren Kui et al. A survey of attacks and defenses against data leakage in artificial intelligence models. Chinese Journal of Network and Information Security, 2021, vol. 7, no. 1, 1-10. *
Zhou Jun et al. A survey of security and privacy protection in federated learning. Journal of Xihua University (Natural Science Edition), 2020, no. 4, 21-29. *
Zhang Xiaoyu et al. Differentially private publication of heterogeneous multi-attribute data based on attribute segmentation. Computer Systems & Applications, 2022, vol. 31, no. 10, 225-235. *
Also Published As
Publication number | Publication date |
---|---|
CN117313869A (en) | 2023-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Krishnaraj et al. | Deep learning model for real-time image compression in Internet of Underwater Things (IoUT) | |
Dong et al. | Semantic communication system based on semantic slice models propagation | |
Sun et al. | Deep pixel‐to‐pixel network for underwater image enhancement and restoration | |
CN111681154B (en) | Color image steganography distortion function design method based on generation countermeasure network | |
CN113315972A (en) | Video semantic communication method and system based on hierarchical knowledge expression | |
Zhang et al. | Optical image compression and encryption transmission based on deep learning and ghost imaging | |
CN116958534A (en) | Image processing method, training method of image processing model and related device | |
CN116258757A (en) | Monocular image depth estimation method based on multi-scale cross attention | |
JP2023001926A (en) | Method and apparatus of fusing image, method and apparatus of training image fusion model, electronic device, storage medium and computer program | |
CN117648994A (en) | Efficient heterogeneous longitudinal federal learning method based on unsupervised learning | |
CN115766159A (en) | Private data processing method and device and electronic equipment | |
CN117633707A (en) | Fine-grained multi-mode Chinese large language model construction method and computer storage medium | |
Ren et al. | Knowledge base enabled semantic communication: A generative perspective | |
CN117313869B (en) | Large model privacy protection reasoning method based on model segmentation | |
CN117879765A (en) | Combined information source channel coding method oriented to image semantic communication | |
CN117939416A (en) | Image self-adaptive semantic communication method with compression transmission and privacy protection | |
Nakahara et al. | Edge computing-assisted DNN image recognition system with progressive image retransmission | |
Ren et al. | Asymmetric Semantic Communication System Based on Diffusion Model in IoT | |
CN115761242B (en) | Denoising method and terminal based on convolutional neural network and fuzzy image characteristics | |
CN114694065A (en) | Video processing method, device, computer equipment and storage medium | |
Zhou et al. | Speech Semantic Communication Based On Swin Transformer | |
Choi et al. | Face Photo-Sketch Synthesis Via Domain-Invariant Feature Embedding | |
CN116862803B (en) | Reverse image reconstruction method, device, equipment and readable storage medium | |
Qiao et al. | Dual‐route synthetic‐to‐real adaption for single image dehazing | |
WO2024021075A1 (en) | Training method, model usage method, and wireless communication method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |