CN116092644A - Medical process auxiliary management system based on core algorithm and virtual reality technology - Google Patents

Medical process auxiliary management system based on core algorithm and virtual reality technology

Info

Publication number
CN116092644A
CN116092644A (application CN202211738137.8A)
Authority
CN
China
Prior art keywords
virtual reality
image
patient
transmission node
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211738137.8A
Other languages
Chinese (zh)
Inventor
Name withheld at the applicant's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Shengyue Information Technology Co ltd
Original Assignee
Yunnan Shengyue Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Shengyue Information Technology Co ltd filed Critical Yunnan Shengyue Information Technology Co ltd
Priority to CN202211738137.8A priority Critical patent/CN116092644A/en
Publication of CN116092644A publication Critical patent/CN116092644A/en
Pending legal-status Critical Current


Classifications

    • G – PHYSICS
    • G16 – INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H – HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 – ICT specially adapted for the handling or processing of medical images
    • G16H30/40 – ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06N – COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N10/00 – Quantum computing, i.e. information processing based on quantum-mechanical phenomena
    • G06N10/60 – Quantum algorithms, e.g. based on quantum optimisation, quantum Fourier or Hadamard transforms
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06N – COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 – Computing arrangements based on biological models
    • G06N3/004 – Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 – Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06N – COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 – Computing arrangements based on biological models
    • G06N3/02 – Neural networks
    • G06N3/08 – Learning methods
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06T – IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 – Image analysis
    • G06T7/0002 – Inspection of images, e.g. flaw detection
    • G06T7/0004 – Industrial image inspection
    • G06T7/001 – Industrial image inspection using an image reference approach
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 – Arrangements for image or video recognition or understanding
    • G06V10/70 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G – PHYSICS
    • G06 – COMPUTING; CALCULATING OR COUNTING
    • G06V – IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 – Arrangements for image or video recognition or understanding
    • G06V10/70 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 – Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G – PHYSICS
    • G16 – INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H – HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 – ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 – ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • Y – GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 – TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P – CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 – Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 – Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computational Mathematics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The medical process auxiliary management system based on the core algorithm and the virtual reality technology comprises a patient individual image acquisition module, a virtual reality environment module and an auxiliary diagnosis module. The patient individual image acquisition module is used for acquiring a local human tissue image and a local human X-ray image. The virtual reality environment module is used for keeping the patient images in the virtual reality environment synchronized with the actually acquired images, converting the patient image data into a virtual image and displaying it, comparing the similarity between the local tissue image and the X-ray image, and predicting the effect of the virtual reality medical process. The auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis. According to the invention, the M-ASA algorithm is adopted to synchronize the patient image information, the S-ADH algorithm is adopted to compare the similarity of the patient images, and the QGA-LVQ neural network is adopted to predict the effect of the virtual reality medical process, so that rapid auxiliary diagnosis is realized and a better scheme is provided for a medical process auxiliary system based on the virtual reality technology.

Description

Medical process auxiliary management system based on core algorithm and virtual reality technology
Technical Field
The invention relates to the fields of virtual reality, image processing and artificial intelligence, and in particular to a medical process auxiliary management system based on a core algorithm and virtual reality technology.
Background
With the rapid development of science and technology, virtual reality and artificial intelligence have become important branches of emerging technology. As research hotspots at the present stage, they need to be equipped with scientific and reasonable machine learning algorithms to meet social demands and strengthen the technical guarantees of virtual reality technology. By combining virtual reality with artificial intelligence technology, the accuracy of patient image information can be enhanced by means of various training and optimization algorithms, and the machine learning capability can be improved.
Virtual reality technology offers a virtualization that goes beyond reality. It is a new computer technology that has developed alongside multimedia technology; it uses three-dimensional graphics generation, multi-sensor interaction and high-resolution display technologies to generate a vivid three-dimensional virtual environment, and it merges multiple information technology branches such as digital image processing, computer graphics, multimedia technology and sensor technology. It has greatly advanced the development of computer technology, and its application has already made significant inroads into the medical process auxiliary industry.
Machine learning is a branch of computer science that grew out of pattern recognition and the computational learning theory of artificial intelligence, and it is widely applied in many related fields. It is a multi-disciplinary subject involving algorithm complexity theory, approximation theory, statistics, probability theory and other theories. Its main research content is how a computer simulates and realizes human learning behavior, acquires new knowledge and skills, and reorganizes its existing knowledge structure to continuously optimize its own performance. Machine learning is the core of artificial intelligence and the basic way of making computers intelligent, and it is gradually being applied in every field related to artificial intelligence; its methods rely mainly on synthesis and induction rather than deduction. Research shows that in most situations, the larger the scale of the processed data, the higher the efficiency of the machine learning model, so machine learning is the main approach to intelligent analysis of big data and an important way of processing big data at the present stage. It can combine various advantages, select the most suitable processing method for a specific problem, and break through limitations caused by human factors. With deep learning, data that is difficult for traditional networks to process can still be handled, yielding high-quality information.
The medical process auxiliary management system based on the core algorithm and the virtual reality technology collects local tissue images of a patient through a CCD camera and collects X-ray images of the local tissue through an X-ray machine. Combined with information technology, it synchronizes the patient image information through the M-ASA algorithm, compares the similarity of the patient images through the S-ADH algorithm and marks the differing parts, and predicts the effect of the virtual reality medical process through the QGA-LVQ neural network, rapidly realizing intelligent image diagnosis with all data traceable. This effectively improves the working effect of the medical process auxiliary management system based on the core algorithm and the virtual reality technology and provides more comprehensive and accurate technical support for it. The invention provides better decision support for a safe, scientific and efficient medical process auxiliary system based on virtual reality technology and ensures patient safety during the medical process. By combining a neural network with an optimization algorithm, it provides people with a safe and highly time-efficient medical process auxiliary management system based on a core algorithm and virtual reality technology, can also consolidate the development of other application fields, and lays a solid foundation for multi-field integration in an era in which virtual reality and artificial intelligence are flourishing. It can be applied to many industries and fields in the market, provides a new development direction for the fusion of virtual reality and artificial intelligence technology, contributes important application value to the big data era, expands the application field, and has obvious effects on information synchronization, image similarity comparison and virtual reality medical effect prediction.
Disclosure of Invention
In view of the foregoing, the present invention aims to provide a medical procedure auxiliary management system based on a core algorithm and virtual reality technology.
The aim of the invention is realized by the following technical scheme:
the medical process auxiliary management system based on the core algorithm and the virtual reality technology comprises a patient individual image acquisition module, a virtual reality environment module and an auxiliary diagnosis module, wherein the patient individual image acquisition module comprises a patient local tissue image acquisition unit and a patient local X-ray image acquisition unit, the patient local tissue image acquisition unit is used for shooting human body local tissues to generate a two-dimensional image, the patient local X-ray image acquisition unit is used for acquiring X-ray images corresponding to the local tissues of a patient, the virtual reality environment module comprises an information synchronization unit, a data conversion unit, a patient image similarity comparison unit and a neural network prediction unit, the information synchronization unit synchronizes patient image information by adopting an M-ASA algorithm, the data conversion unit converts electronic signals generated by a computer into image forms which can be perceived by human sense organs through various output devices, the patient image similarity comparison unit carries out similarity comparison on the patient images by adopting an S-ADH algorithm and finds out marking on difference parts, and the neural network prediction unit carries out effect prediction on the virtual reality process by adopting a QGA-LVQ neural network, and the auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis.
Further, the local tissue image acquisition unit of the patient acquires a local tissue image of the patient through the CCD camera and generates a two-dimensional image.
Further, the patient local X-ray image acquisition unit performs X-ray scanning on the patient through an X-ray machine to acquire an X-ray image of the patient local tissue.
Furthermore, the information synchronization unit performs information synchronization between the real environment and the virtual reality environment on the image information of the patient by adopting an M-ASA algorithm, so that the image information of the patient in the virtual reality environment is consistent with the image information acquired in the real environment.
Further, the M-ASA algorithm is specifically as follows: assume that transmission node A and transmission node B have known two-dimensional coordinates, and let $d$ denote the distance between transmission node A and transmission node B. When the velocities of transmission node A and transmission node B do not change during the synchronization process, a relative velocity vector $\vec{v}_{AB}$ is defined between the two nodes. Transmission node A has a fixed time slot for transmitting patient image data packets within each period of length $T$, so the relative displacement between the two transmission nodes between two successive receptions of a data packet by transmission node B is $\vec{v}_{AB}T$. When transmission node B receives a packet from transmission node A in three consecutive periods, the corresponding distance vectors between transmission node A and transmission node B can be expressed in terms of the initial distance vector and this relative displacement. Let $T_{A1}, T_{A2}, T_{A3}$ be the clock counter values of transmission node A when the packets are sent and $T_{B1}, T_{B2}, T_{B3}$ the clock counter values of transmission node B when the packets are received; the two sets of counters are related through the clock offset between the two nodes and the propagation time. Because transmission node A transmits in a fixed time slot of each period, $T_{A2}-T_{A1}=T_{A3}-T_{A2}=T$; transmission node B therefore records its own clock counter values and uses them to calculate the propagation time differences over the three consecutive periods. Taking the propagation speed of the information transmission as $c$, the change of the distance between the two nodes over consecutive periods equals $c$ multiplied by the corresponding propagation time difference. When transmission node B receives the patient image data packet from transmission node A for the second time, the distance between transmission node A and transmission node B is obtained by solving the resulting equations; the solution yields two candidates, and the valid one is selected using the propagation time difference $\Delta t_3$ between the third and fourth periods. The time difference between the two time slots respectively used by transmission node A and transmission node B is $\Delta T$. After the calculation of the distance vector is completed, when the clock counter of transmission node B is $T_{B4}$, transmission node B sends a packet requesting a round-trip-time correction from transmission node A; when the clock counter of transmission node A is $T_{A4}$, transmission node A receives the packet from transmission node B; when the clock counter of transmission node A is $T_{A5}$, it replies to transmission node B; transmission node B receives the reply at $T_{B5}$ and calculates the clock correction from the four timestamps $T_{B4}$, $T_{A4}$, $T_{A5}$ and $T_{B5}$. Information synchronization between the real environment and the virtual reality environment is carried out on the patient image information through the M-ASA algorithm in this way so as to reduce the information synchronization error.
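A minimal sketch of the round-trip correction step of the synchronization described above follows. The exact correction formula is shown only as an equation image in the original publication, so the sketch assumes the standard two-way time-transfer estimate computed from the four timestamps $T_{B4}$, $T_{A4}$, $T_{A5}$, $T_{B5}$; the relative velocity and clock counter values used in the example are illustrative.

```python
import numpy as np

def round_trip_correction(t_b4, t_a4, t_a5, t_b5):
    """Estimate the clock offset of node B relative to node A and the one-way
    propagation delay from one request/reply exchange (standard two-way
    time-transfer estimate; an assumption, since the patent's exact formula
    appears only as an equation image)."""
    offset = ((t_a4 - t_b4) + (t_a5 - t_b5)) / 2.0   # A's clock minus B's clock
    delay = ((t_b5 - t_b4) - (t_a5 - t_a4)) / 2.0    # one-way propagation time
    return offset, delay

def relative_displacement(v_ab, period):
    """Relative displacement of node B with respect to node A accumulated over one
    transmission period T, assuming the relative velocity vector stays constant."""
    return np.asarray(v_ab, dtype=float) * period

if __name__ == "__main__":
    # Hypothetical clock counter values, all in the same time unit.
    offset, delay = round_trip_correction(t_b4=100.0, t_a4=112.4, t_a5=112.9, t_b5=101.1)
    print(f"estimated clock offset: {offset:.3f}, one-way delay: {delay:.3f}")
    print("displacement over one period:", relative_displacement([0.2, -0.1], period=0.5))
```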
Further, the data conversion unit uses the virtual reality device to convert computer-generated electronic signals, through various output devices, into image forms that can be perceived by the human sense organs.
Furthermore, the patient image similarity comparison unit compares the similarity of the patient images by adopting the S-ADH algorithm and marks the parts that differ.
Further, the S-ADH algorithm is specifically as follows: assume that, given a set of training data $X=\{x_1,x_2,\dots,x_n\}$, a set of binary codes $B=\{b_1,b_2,\dots,b_n\}$ is to be learned that is compact and that well preserves the semantic similarity between the data; at the same time an effective hash function $b=H(x;W)$ is required to encode the patient images into Hamming space, where $W$ is the model parameter. To train the deep hash model, binary codes that preserve similarity are first generated for the training data by solving the graph hashing problem $\min_{B}\operatorname{tr}(BLB^{\mathrm T})$ s.t. $B\in\{-1,1\}^{r\times n}$, $BB^{\mathrm T}=nI_r$, $B\mathbf{1}=0$, where $L$ is the graph Laplacian, i.e. $L=\operatorname{diag}(A\mathbf{1})-A$, the affinity matrix entry $A_{ij}$ represents the similarity of the data pair $(x_i,x_j)$ in the input feature space, $A\mathbf{1}$ is the product of $A$ with the all-ones column vector $\mathbf{1}$, and the last two constraints respectively force the learned binary codes to be uncorrelated and balanced. The overall training process of the S-ADH algorithm comprises three parts, namely deep hash function training, similarity graph updating and binary code optimization.

First, the deep hash function training part, whose specific process is as follows: the deep hash model is trained using a Euclidean loss layer that measures the difference between the output of the depth model and the binary codes learned during the last iteration, i.e. $\min_{W}\sum_{i=1}^{n}\lVert H(x_i;W)-b_i\rVert^2$, where $W=\{W_l\}$ are the parameters of the deep network architecture and $W_l$ are the weight parameters in each layer; a model pre-trained on the large-scale ImageNet dataset is used as the initialization of the deep hash function and is fine-tuned on the hashing problems of the different datasets, and the parameters $\{W_l\}$ of the proposed model are optimized through standard back-propagation and stochastic gradient descent.

Then the similarity graph updating part, whose specific process is as follows: by encoding the images with this depth model, a more powerful depth representation is obtained, and the pairwise similarity graph is updated as $A_{ij}=\exp\left(-\lVert F(x_i)-F(x_j)\rVert^{2}/\sigma^{2}\right)$, where $F(x_i)$ is the model feature with the last fully connected layer removed and $\sigma$ is the bandwidth parameter; the updated similarity graph is then carried into the next binary code optimization step.

Finally, the binary code optimization part, whose specific process is as follows: the graph hashing problem is written as $\min L(B)$ s.t. $B\in S_b$, $B\in S_p$, where $S_b$ is $[-1,1]^{r\times n}$ and $S_p$ is the set of matrices satisfying the uncorrelatedness and balance constraints. Binary code optimization is performed on the basis of the ADMM algorithm: two auxiliary variables $Z_1$ and $Z_2$ are introduced to absorb the constraints of $S_b$ and $S_p$ respectively, i.e. the graph hashing problem is restated with $Z_1=B$, $Z_1\in S_b$ and $Z_2=B$, $Z_2\in S_p$, and is solved according to the ADMM algorithm by minimizing the corresponding augmented Lagrangian, in which $\delta_S(Z)$ is an indicator function, $Y_1$ and $Y_2$ are dual variables, and $\rho_1$ and $\rho_2$ are penalty variables. The update of the code $B$ is as follows: in iteration $k+1$, all variables except $B$ are fixed and $B^{k+1}$ is updated by minimizing the augmented Lagrangian with respect to $B$, whose gradient is $2BL+\mu_1(BB^{\mathrm T}-nI_r)B+\mu_2 B\mathbf{1}\mathbf{1}^{\mathrm T}+(\rho_1+\rho_2)B+G$. The updates of the auxiliary variables $Z_1$ and $Z_2$ are as follows: with $B^{k+1}$ and the dual variables fixed, $Z_1^{k+1}$ is updated by the proximal minimization method, where $\Pi$ is a projection operator, and $Z_2^{k+1}$ is updated by transforming its subproblem into the corresponding projection. The dual variables $Y_1$ and $Y_2$ are updated by gradient ascent, where $\gamma$ is a parameter that accelerates convergence. The similarity comparison analysis of the patient images is carried out through this training of the S-ADH algorithm.
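To illustrate how learned binary codes support the patient image similarity comparison unit, the sketch below encodes two images with a stand-in hash function, compares the codes by Hamming distance, and marks the blocks where the images differ. The `toy_hash` projection and the block-wise difference marking are assumptions standing in for the trained S-ADH deep hash model; they are not taken from the patent text.

```python
import numpy as np

def toy_hash(image, proj):
    """Stand-in for the trained deep hash function H(x; W): projects the
    flattened image onto r fixed random directions and takes signs to obtain
    an r-bit binary code in {-1, +1}."""
    return np.where(proj @ image.ravel().astype(np.float64) >= 0, 1, -1)

def hamming_distance(code_a, code_b):
    """Number of differing bits between two {-1, +1} codes."""
    return int(np.sum(code_a != code_b))

def mark_differences(img_a, img_b, block=16, thresh=10.0):
    """Return top-left corners of blocks whose mean absolute intensity
    difference exceeds the threshold (the 'difference part' to be marked)."""
    marks = []
    h, w = img_a.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            d = np.abs(img_a[y:y+block, x:x+block].astype(float)
                       - img_b[y:y+block, x:x+block].astype(float)).mean()
            if d > thresh:
                marks.append((y, x))
    return marks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tissue = rng.integers(0, 256, size=(128, 128))              # toy local tissue image
    xray = tissue.copy()
    xray[32:48, 64:80] = np.clip(xray[32:48, 64:80] + 60, 0, 255)  # simulated local difference
    proj = rng.normal(size=(64, tissue.size))                    # fixed random projection
    code_a, code_b = toy_hash(tissue, proj), toy_hash(xray, proj)
    print("Hamming distance:", hamming_distance(code_a, code_b))
    print("marked blocks:", mark_differences(tissue, xray))
```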
Furthermore, the neural network prediction unit predicts the effect of the virtual reality medical process by adopting the QGA-LVQ neural network, evaluating the effect of the medical process from the analysis of the patient images in the virtual reality environment.
Further, the QGA-LVQ neural network is specifically as follows: in a two-dimensional complex vector space, the two basis states of a qubit are defined as $|0\rangle$ and $|1\rangle$; the state of the qubit, as the minimum information unit, is a superposition of the two states and is expressed as $|\psi\rangle=\tau|0\rangle+\nu|1\rangle$, where $\tau$ and $\nu$ are complex numbers representing the associated probability amplitudes and satisfy $|\tau|^{2}+|\nu|^{2}=1$. In the QGA algorithm, chromosomes are encoded with qubits and quantum superposition states: each quantum chromosome of generation $t$ is encoded as an array of $m$ qubit amplitude pairs $(\tau_i,\nu_i)$, the quantum population of the $t$-th generation is expressed as $Q(t)=\{q_1^t,q_2^t,\dots,q_n^t\}$, $m$ is the number of qubits and $n$ is the population size. The qubits complete state transitions through matrix transformations performed by quantum gates, realizing population evolution; the qubit operation uses the quantum rotation gate, and the population evolution process is expressed as $[\tau_i',\nu_i']^{\mathrm T}=U(\theta_i)[\tau_i,\nu_i]^{\mathrm T}$, where $\theta_i$ is the rotation angle adjusted according to the regulation rules. Quantum crossover is a full-interference crossover based on the coherent nature of quanta, and every quantum chromosome in the population participates in the crossover; during quantum mutation, a quantum mutation operator $U(\omega(\Delta\theta_i))$ is used to achieve updating and optimization, with $\omega(\Delta\theta_i)=f(\tau_i,\nu_i)\cdot\Delta\theta_i$, where $f(\tau_i,\nu_i)$ is the rotation direction, $\Delta\theta_i$ is the rotation magnitude and $\delta$ is the adjustment factor. The structure of the LVQ neural network is divided into three layers, namely an input layer, a competition layer and an output layer, and the LVQ neural network is calculated as follows: first, the weights $W_{ij}$ between the input layer and the competition layer and the learning rate $\eta$ are initialized; then the input vector $P=(p_1,p_2,\dots,p_R)^{\mathrm T}$, where $R$ is the number of input elements, is fed to the input layer and the distance $d_i$ between each competition-layer neuron and the input vector is calculated, where $i=1,2,\dots,S_l$ and $S_l$ is the number of competition-layer neurons; the competition-layer neuron closest to the input vector is then selected: when $d_j$ is minimal, the class label of the output-layer neuron connected to it is denoted $C_j$, and the class label corresponding to the input vector is set to $C_x$. When $C_j=C_x$, the weight is updated as $W_{ij}^{new}=W_{ij}^{old}+\eta(x-W_{ij}^{old})$; when $C_j\ne C_x$, the weight is updated as $W_{ij}^{new}=W_{ij}^{old}-\eta(x-W_{ij}^{old})$. Finally, the LVQ neural network is trained cyclically until the error precision meets the requirement.
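A minimal sketch of the LVQ competition and weight-update rule described above follows; the prototype initialization, learning rate and toy data are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def train_lvq(X, y, n_prototypes_per_class=1, eta=0.1, epochs=20, seed=0):
    """Learning Vector Quantization: move the winning prototype toward the
    sample if its class matches (C_j == C_x), otherwise push it away."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.where(y == c)[0], size=n_prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * n_prototypes_per_class)
    W = np.vstack(protos).astype(float)          # competition-layer weights
    proto_labels = np.array(proto_labels)

    for _ in range(epochs):
        for xi, cx in zip(X, y):
            d = np.linalg.norm(W - xi, axis=1)   # distance to each competition neuron
            j = int(np.argmin(d))                # winning prototype
            sign = 1.0 if proto_labels[j] == cx else -1.0
            W[j] += sign * eta * (xi - W[j])     # LVQ update rule
    return W, proto_labels

def predict_lvq(W, proto_labels, X):
    return proto_labels[np.argmin(np.linalg.norm(W[None] - X[:, None], axis=2), axis=1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    W, labels = train_lvq(X, y)
    print("training accuracy:", (predict_lvq(W, labels, X) == y).mean())
```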
the QGA algorithm and the LVQ neural network are combined into a QGA-LVQ neural network, the main idea is that firstly, an initial value is selected for the LVQ neural network through the QGA algorithm, and then the total body gradually converges to an optimal solution so as to further improve classification accuracy, and the prediction of the QGA-LVQ neural network is specifically as follows: first, the genetic algebra t is set to 0, and population Q (t 0 ) Observing each individual of the initial population once to obtain a state o (t), and then determining an fitness function of the QGA-LVQ neural network from the distances, using the average distance between the random individuals in the population and the sample points in the input layer to calculate the fitness function, i.e
Figure BDA0004033854670000063
wherein ,Ft N is an element set belonging to k classes k For the number of k-class elements, x j For the input vector of training samples of the LVQ neural network, the fitness of the random individuals is calculated by the following formula, namely +.>
Figure BDA0004033854670000064
The termination condition of the iterative calculation is then +.>
Figure BDA0004033854670000065
N is the number of input vectors of samples, each individual in the group Q (t) is observed once, the fitness of each state is calculated, the group is updated through a quantum rotating gate, so that random individuals and the health condition of the random individuals are recorded until the error precision reaches the requirement to terminate iteration, and the random individuals are subjected to the analysis by a QGA-LVQ nerveThe training process of the network predicts the effect of the virtual reality medical process.
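The sketch below illustrates the quantum-rotation-gate population update that drives the QGA part; the rotation angle, the equal-superposition initialization and the toy fitness (a stand-in for the class-average-distance fitness used to choose LVQ initial values) are assumptions for illustration only.

```python
import numpy as np

def rotation_gate(theta):
    """Quantum rotation gate U(theta)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def observe(pop):
    """Collapse each qubit: P(bit = 1) equals nu**2 for the (tau, nu) pair."""
    return (np.random.rand(*pop.shape[:2]) < pop[..., 1] ** 2).astype(int)

def qga(fitness, n_qubits=16, pop_size=20, generations=60, dtheta=0.05 * np.pi, seed=0):
    """Minimal quantum genetic algorithm: amplitudes start in equal superposition
    and are rotated toward the best observed individual each generation."""
    np.random.seed(seed)
    pop = np.full((pop_size, n_qubits, 2), 1.0 / np.sqrt(2.0))   # (tau, nu) per qubit
    best_bits, best_fit = None, -np.inf
    for _ in range(generations):
        bits = observe(pop)
        fits = np.array([fitness(b) for b in bits])
        if fits.max() > best_fit:
            best_fit, best_bits = fits.max(), bits[fits.argmax()].copy()
        for i in range(pop_size):
            for q in range(n_qubits):
                if bits[i, q] != best_bits[q]:                    # rotate toward the best
                    direction = 1.0 if best_bits[q] == 1 else -1.0
                    pop[i, q] = rotation_gate(direction * dtheta) @ pop[i, q]
    return best_bits, best_fit

if __name__ == "__main__":
    # Toy fitness standing in for the class-average-distance fitness used to
    # pick LVQ initial weights: count of 1-bits (maximized at all ones).
    bits, fit = qga(lambda b: int(b.sum()))
    print("best bits:", bits, "fitness:", fit)
```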
Furthermore, the auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis, analyzing image data according to the auxiliary effect of the medical process in the virtual reality environment, and realizing auxiliary diagnosis of the patient image.
The invention has the beneficial effects that: the invention combines virtual reality technology, information synchronization technology, an optimization algorithm, an image similarity comparison algorithm and a neural network structure, effectively improves the performance of the medical process auxiliary management system based on the core algorithm and the virtual reality technology, and provides a better scheme for such a system. Local tissue images of a patient are acquired through a CCD camera and X-ray images of the local tissue are acquired through an X-ray machine; combined with information technology, the patient image information is synchronized through the M-ASA algorithm, the similarity of the patient images is compared through the S-ADH algorithm and the differing parts are marked, and the effect of the virtual reality medical process is predicted through the QGA-LVQ neural network, so that intelligent image diagnosis is realized rapidly and all data are traceable. This effectively improves the working effect of the medical process auxiliary management system based on the core algorithm and the virtual reality technology, provides more comprehensive and accurate technical support for it, provides better decision support for a safe, scientific and efficient medical process auxiliary system based on virtual reality technology, and ensures safety during the patient's medical process. At the same time, by combining a neural network with an optimization algorithm, the invention provides people with a safe and highly time-efficient medical process auxiliary management system based on a core algorithm and virtual reality technology, can consolidate the development of other application fields, lays a solid foundation for multi-field integration in an era in which virtual reality and artificial intelligence are flourishing, can be applied to many industries and fields in the market, provides a new development direction for the fusion of virtual reality and artificial intelligence technology, and contributes important application value to the big data era.
Drawings
The invention will be further described with reference to the accompanying drawings, in which the embodiments do not constitute any limitation on the invention, and from which other drawings can be obtained by a person of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic diagram of the structure of the present invention.
Detailed Description
The invention will be further described with reference to the following examples.
The medical process auxiliary management system based on the core algorithm and the virtual reality technology comprises a patient individual image acquisition module, a virtual reality environment module and an auxiliary diagnosis module. The patient individual image acquisition module comprises a patient local tissue image acquisition unit and a patient local X-ray image acquisition unit; the patient local tissue image acquisition unit is used for photographing local human tissue to generate a two-dimensional image, and the patient local X-ray image acquisition unit is used for acquiring an X-ray image corresponding to the local tissue of the patient. The virtual reality environment module comprises an information synchronization unit, a data conversion unit, a patient image similarity comparison unit and a neural network prediction unit; the information synchronization unit synchronizes the patient image information by adopting the M-ASA algorithm, the data conversion unit converts electronic signals generated by a computer into image forms that can be perceived by human sense organs through various output devices, the patient image similarity comparison unit compares the similarity of the patient images by adopting the S-ADH algorithm and marks the differing parts, and the neural network prediction unit predicts the effect of the virtual reality medical process by adopting the QGA-LVQ neural network. The auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis.
Preferably, the patient local tissue image acquisition unit acquires a local tissue image of the patient by a CCD camera and generates the two-dimensional image.
Preferably, the patient partial X-ray image acquisition unit acquires an X-ray image of a patient partial tissue by X-ray scanning of the patient by an X-ray machine.
Preferably, the information synchronization unit performs information synchronization between the real environment and the virtual reality environment on the patient image information by using an M-ASA algorithm, so that the patient image information in the virtual reality environment is consistent with the image information acquired in the real environment.
Specifically, the M-ASA algorithm is as follows: assume that transmission node A and transmission node B have known two-dimensional coordinates, and let $d$ denote the distance between transmission node A and transmission node B. When the velocities of transmission node A and transmission node B do not change during the synchronization process, a relative velocity vector $\vec{v}_{AB}$ is defined between the two nodes. Transmission node A has a fixed time slot for transmitting patient image data packets within each period of length $T$, so the relative displacement between the two transmission nodes between two successive receptions of a data packet by transmission node B is $\vec{v}_{AB}T$. When transmission node B receives a packet from transmission node A in three consecutive periods, the corresponding distance vectors between transmission node A and transmission node B can be expressed in terms of the initial distance vector and this relative displacement. Let $T_{A1}, T_{A2}, T_{A3}$ be the clock counter values of transmission node A when the packets are sent and $T_{B1}, T_{B2}, T_{B3}$ the clock counter values of transmission node B when the packets are received; the two sets of counters are related through the clock offset between the two nodes and the propagation time. Because transmission node A transmits in a fixed time slot of each period, $T_{A2}-T_{A1}=T_{A3}-T_{A2}=T$; transmission node B therefore records its own clock counter values and uses them to calculate the propagation time differences over the three consecutive periods. Taking the propagation speed of the information transmission as $c$, the change of the distance between the two nodes over consecutive periods equals $c$ multiplied by the corresponding propagation time difference. When transmission node B receives the patient image data packet from transmission node A for the second time, the distance between transmission node A and transmission node B is obtained by solving the resulting equations; the solution yields two candidates, and the valid one is selected using the propagation time difference $\Delta t_3$ between the third and fourth periods. The time difference between the two time slots respectively used by transmission node A and transmission node B is $\Delta T$. After the calculation of the distance vector is completed, when the clock counter of transmission node B is $T_{B4}$, transmission node B sends a packet requesting a round-trip-time correction from transmission node A; when the clock counter of transmission node A is $T_{A4}$, transmission node A receives the packet from transmission node B; when the clock counter of transmission node A is $T_{A5}$, it replies to transmission node B; transmission node B receives the reply at $T_{B5}$ and calculates the clock correction from the four timestamps $T_{B4}$, $T_{A4}$, $T_{A5}$ and $T_{B5}$. Information synchronization between the real environment and the virtual reality environment is carried out on the patient image information through the M-ASA algorithm in this way so as to reduce the information synchronization error.

Because the synchronization process is fast, the relative velocity vector between transmission node A and transmission node B remains unchanged in most cases; if the relative velocity vector changes greatly during the synchronization process, the motion information in the packet may differ considerably from the average velocity, in which case the synchronization process is restarted. In the M-ASA algorithm, only the relative velocity vector $\vec{v}_{AB}$ and the transmission interval $T$ need to be known; the clock counter values recorded when transmission node B receives the data packets from transmission node A in four consecutive periods then yield the distance vector between transmission node A and transmission node B, completing the advanced synchronization. Because the synchronization accuracy of the M-ASA algorithm is high, the transmission nodes in the network only need to execute the synchronization process periodically, after a few periods, in order to check and maintain a stable synchronization state of the network.
Preferably, the data conversion unit uses the virtual reality device to convert computer-generated electronic signals, through various output devices, into image forms that can be perceived by the human sense organs.
Preferably, the patient image similarity comparison unit compares the similarity of the patient images by adopting the S-ADH algorithm and marks the parts that differ.
Specifically, the S-ADH algorithm is as follows: assume that, given a set of training data $X=\{x_1,x_2,\dots,x_n\}$, a set of binary codes $B=\{b_1,b_2,\dots,b_n\}$ is to be learned that is compact and that well preserves the semantic similarity between the data; at the same time an effective hash function $b=H(x;W)$ is required to encode the patient images into Hamming space, where $W$ is the model parameter. In order to achieve a better mapping from the data to the binary codes, a deep CNN model is used here as the hash function, which brings a large performance gain. To train the deep hash model, binary codes that preserve similarity are first generated for the training data by solving the graph hashing problem $\min_{B}\operatorname{tr}(BLB^{\mathrm T})$ s.t. $B\in\{-1,1\}^{r\times n}$, $BB^{\mathrm T}=nI_r$, $B\mathbf{1}=0$, where $L$ is the graph Laplacian, i.e. $L=\operatorname{diag}(A\mathbf{1})-A$, the affinity matrix entry $A_{ij}$ represents the similarity of the data pair $(x_i,x_j)$ in the input feature space, $A\mathbf{1}$ is the product of $A$ with the all-ones column vector $\mathbf{1}$, and the last two constraints respectively force the learned binary codes to be uncorrelated and balanced. The overall training process of the S-ADH algorithm comprises three parts, namely deep hash function training, similarity graph updating and binary code optimization.

First, the deep hash function training part, whose specific process is as follows: the deep hash model is trained using a Euclidean loss layer that measures the difference between the output of the depth model and the binary codes learned during the last iteration, i.e. $\min_{W}\sum_{i=1}^{n}\lVert H(x_i;W)-b_i\rVert^2$, where $W=\{W_l\}$ are the parameters of the deep network architecture and $W_l$ are the weight parameters in each layer; a model pre-trained on the large-scale ImageNet dataset is used as the initialization of the deep hash function and is fine-tuned on the hashing problems of the different datasets, and the parameters $\{W_l\}$ of the proposed model are optimized through standard back-propagation and stochastic gradient descent (SGD).

Then the similarity graph updating part, whose specific process is as follows: once the depth network is trained, a hash function is obtained together with a feature representation model; by encoding the images with this depth model, a more powerful depth representation is obtained, and the pairwise similarity graph is updated as $A_{ij}=\exp\left(-\lVert F(x_i)-F(x_j)\rVert^{2}/\sigma^{2}\right)$, where $F(x_i)$ is the model feature with the last fully connected layer removed and $\sigma$ is the bandwidth parameter. The updated similarity graph is then carried into the next binary code optimization step and serves as a bridge between the deep hash model and the binary codes; updating the similarity graph makes them more compatible and, at the same time, captures the semantic structure of the images with the output depth representation more effectively.

Finally, the binary code optimization part, whose specific process is as follows: the graph hashing problem is written as $\min L(B)$ s.t. $B\in S_b$, $B\in S_p$, where $S_b$ is $[-1,1]^{r\times n}$ and $S_p$ is the set of matrices satisfying the uncorrelatedness and balance constraints. Binary code optimization is performed on the basis of the ADMM algorithm: two auxiliary variables $Z_1$ and $Z_2$ are introduced to absorb the constraints of $S_b$ and $S_p$ respectively, i.e. the graph hashing problem is restated with $Z_1=B$, $Z_1\in S_b$ and $Z_2=B$, $Z_2\in S_p$, and is solved according to the ADMM algorithm by minimizing the corresponding augmented Lagrangian, in which $\delta_S(Z)$ is an indicator function that outputs 0 if $Z\in S$ and $+\infty$ otherwise, $Y_1$ and $Y_2$ are dual variables, and $\rho_1$ and $\rho_2$ are penalty variables. Following the ADMM procedure, the original variables $B$, $Z_1$, $Z_2$ are iteratively updated by minimizing the augmented Lagrangian, and a gradient ascent is then performed on the dual problem to update $Y_1$, $Y_2$. The update of the code $B$ is as follows: in iteration $k+1$, all variables except $B$ are fixed and $B^{k+1}$ is updated by minimizing the augmented Lagrangian with respect to $B$, whose gradient is $2BL+\mu_1(BB^{\mathrm T}-nI_r)B+\mu_2 B\mathbf{1}\mathbf{1}^{\mathrm T}+(\rho_1+\rho_2)B+G$. The updates of the auxiliary variables $Z_1$ and $Z_2$ are as follows: with $B^{k+1}$ and the dual variables fixed, $Z_1^{k+1}$ is updated by the proximal minimization method, where $\Pi$ is a projection operator, and $Z_2^{k+1}$ is updated by transforming its subproblem into the corresponding projection. The dual variables $Y_1$ and $Y_2$ are updated by gradient ascent, where $\gamma$ is a parameter that accelerates convergence, and each variable is optimized alternately in this way until convergence. The overall training process of the S-ADH algorithm thus includes three parts that all contribute to learning better binary codes and capturing the intrinsic data similarity: the hash function training step minimizes the difference between the learned codes and the hash function output, the graph updating step calculates a more accurate similarity graph with the updated depth representation, and the binary code optimization step learns more effective hash codes based on the updated graph matrix. The similarity comparison analysis of the patient images is carried out through this training of the S-ADH algorithm.
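A compact sketch of the three-part alternating loop described above (deep hash function training, similarity graph updating, binary code optimization) follows; the linear hash layer, Gaussian graph and sign-based code update are simplified stand-ins with assumed signatures, not the patent's implementation.

```python
import numpy as np

def train_hash_network(X, B):
    """Fit the hash model so its outputs approach the current codes B
    (Euclidean loss); a single least-squares linear layer stands in for
    the fine-tuned deep CNN."""
    W, *_ = np.linalg.lstsq(X, B.T, rcond=None)
    return W

def update_similarity_graph(X, W, sigma=1.0):
    """Rebuild the pairwise similarity graph from the current features with a
    Gaussian kernel of bandwidth sigma."""
    F = X @ W                                       # stand-in for the deep representation
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def optimize_codes(X, W, A):
    """Stand-in for the ADMM binary-code step: codes are taken as the signs of
    the network outputs (the real step also enforces uncorrelated, balanced
    codes against the updated graph A)."""
    return np.where((X @ W).T >= 0, 1, -1)

def sadh_like_training(X, r=16, iters=3, seed=0):
    """Alternate the three parts: hash training, graph update, code update."""
    rng = np.random.default_rng(seed)
    B = np.where(rng.normal(size=(r, X.shape[0])) >= 0, 1, -1)
    for _ in range(iters):
        W = train_hash_network(X, B)
        A = update_similarity_graph(X, W)
        B = optimize_codes(X, W, A)
    return W, B

if __name__ == "__main__":
    X = np.random.default_rng(2).normal(size=(40, 8))
    W, B = sadh_like_training(X)
    print("codes:", B.shape, "bit values:", np.unique(B))
```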
Preferably, the neural network prediction unit predicts the effect of the virtual reality medical process by adopting the QGA-LVQ neural network, evaluating the effect of the medical process from the analysis of the patient images in the virtual reality environment.
Specifically, the QGA-LVQ neural network is as follows: in a two-dimensional complex vector space, the two basis states of a qubit are defined as $|0\rangle$ and $|1\rangle$; the state of the qubit, as the minimum information unit, is a superposition of the two states and is expressed as $|\psi\rangle=\tau|0\rangle+\nu|1\rangle$, where $\tau$ and $\nu$ are complex numbers representing the associated probability amplitudes and satisfy $|\tau|^{2}+|\nu|^{2}=1$. In the QGA algorithm, chromosomes are encoded with qubits and quantum superposition states: each quantum chromosome of generation $t$ is encoded as an array of $m$ qubit amplitude pairs $(\tau_i,\nu_i)$, the quantum population of the $t$-th generation is expressed as $Q(t)=\{q_1^t,q_2^t,\dots,q_n^t\}$, $m$ is the number of qubits and $n$ is the population size. The qubits complete state transitions through matrix transformations performed by quantum gates, realizing population evolution; the qubit operation uses the quantum rotation gate, and the population evolution process is expressed as $[\tau_i',\nu_i']^{\mathrm T}=U(\theta_i)[\tau_i,\nu_i]^{\mathrm T}$, where $\theta_i$ is the rotation angle adjusted according to the regulation rules. Quantum crossover is a full-interference crossover based on the coherent nature of quanta, and every quantum chromosome in the population participates in the crossover; during quantum mutation, a quantum mutation operator $U(\omega(\Delta\theta_i))$ is used to achieve updating and optimization, with $\omega(\Delta\theta_i)=f(\tau_i,\nu_i)\cdot\Delta\theta_i$, where $f(\tau_i,\nu_i)$ is the rotation direction, $\Delta\theta_i$ is the rotation magnitude and $\delta$ is the adjustment factor. The structure of the LVQ neural network is divided into three layers, namely an input layer, a competition layer and an output layer, and the LVQ neural network is calculated as follows: first, the weights $W_{ij}$ between the input layer and the competition layer and the learning rate $\eta$ are initialized; then the input vector $P=(p_1,p_2,\dots,p_R)^{\mathrm T}$, where $R$ is the number of input elements, is fed to the input layer and the distance $d_i$ between each competition-layer neuron and the input vector is calculated, where $i=1,2,\dots,S_l$ and $S_l$ is the number of competition-layer neurons; the competition-layer neuron closest to the input vector is then selected: when $d_j$ is minimal, the class label of the output-layer neuron connected to it is denoted $C_j$, and the class label corresponding to the input vector is set to $C_x$. When $C_j=C_x$, the weight is updated as $W_{ij}^{new}=W_{ij}^{old}+\eta(x-W_{ij}^{old})$; when $C_j\ne C_x$, the weight is updated as $W_{ij}^{new}=W_{ij}^{old}-\eta(x-W_{ij}^{old})$. Finally, the LVQ neural network is trained cyclically until the error precision meets the requirement.

The QGA algorithm and the LVQ neural network are combined into the QGA-LVQ neural network. The main idea is to first select initial values for the LVQ neural network through the QGA algorithm and then let the whole gradually converge to the optimal solution, so as to further improve the classification accuracy; classification accuracy is the premise of medical prediction. The prediction of the QGA-LVQ neural network is specifically as follows: first, the generation number $t$ is set to 0 and the population $Q(t_0)$ is initialized; each individual of the initial population is observed once to obtain a state $o(t)$; then the fitness function of the QGA-LVQ neural network is determined from distances, using the average distance between a random individual in the population and the sample points in the input layer, i.e. for each class $k$ the average distance between the class-$k$ training sample input vectors and the corresponding individual is computed, where $F_t$ is the set of elements belonging to class $k$, $N_k$ is the number of class-$k$ elements and $x_j$ is the input vector of an LVQ training sample; the fitness of a random individual is then obtained from these class-average distances. The termination condition of the iterative calculation is that the error computed over the $N$ sample input vectors reaches the required precision, where $N$ is the number of sample input vectors. Each individual in the population $Q(t)$ is observed once, the fitness of each state is calculated and the population is updated through the quantum rotation gate; the random individuals and their fitness are recorded until the error precision reaches the requirement and the iteration stops. The effect of the virtual reality medical process is predicted through this training process of the QGA-LVQ neural network.
Preferably, the auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis, analyzing image data according to the auxiliary effect of the medical process in the virtual reality environment, and realizing auxiliary diagnosis of the patient image.
The invention collects local tissue images of a patient through a CCD camera and X-ray images of the local tissue through an X-ray machine, synchronizes the patient image information by adopting the M-ASA algorithm, compares the similarity of the patient images by adopting the S-ADH algorithm and marks the differing parts, and predicts the effect of the virtual reality medical process by adopting the QGA-LVQ neural network, thereby rapidly realizing intelligent image diagnosis. This effectively improves the working effect of the medical process auxiliary management system based on the core algorithm and the virtual reality technology, provides more comprehensive and accurate technical support for it, provides better decision support for a medical process auxiliary system based on virtual reality technology, and ensures safety during the patient's medical process. At the same time, by combining a neural network with an optimization algorithm, it lays a solid foundation for the integrated development of virtual reality and artificial intelligence, can be applied in many industries and fields in the market, and provides important application value and a new development direction for virtual reality technology in the big data era.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications can be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (10)

1. The medical process auxiliary management system based on the core algorithm and the virtual reality technology is characterized by comprising a patient individual image acquisition module, a virtual reality environment module and an auxiliary diagnosis module, wherein the patient individual image acquisition module comprises a patient local tissue image acquisition unit and a patient local X-ray image acquisition unit, the patient local tissue image acquisition unit is used for photographing local human tissue to generate a two-dimensional image, the patient local X-ray image acquisition unit is used for acquiring an X-ray image corresponding to the local tissue of the patient, the virtual reality environment module comprises an information synchronization unit, a data conversion unit, a patient image similarity comparison unit and a neural network prediction unit, the information synchronization unit synchronizes the patient image information by adopting an M-ASA algorithm, the data conversion unit converts electronic signals generated by a computer into image forms which can be perceived by human sense organs through various output devices, the patient image similarity comparison unit performs similarity comparison on the patient images by adopting an S-ADH algorithm and marks the differing parts, the neural network prediction unit performs effect prediction on the virtual reality medical process by adopting a QGA-LVQ neural network, and the auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis.
2. The medical procedure auxiliary management system based on a core algorithm and a virtual reality technology according to claim 1, wherein the patient local tissue image acquisition unit acquires a local tissue image of the patient by a CCD camera and generates a two-dimensional image.
3. The medical procedure auxiliary management system based on a core algorithm and virtual reality technology according to claim 1, wherein the patient local X-ray image acquisition unit acquires an X-ray image of the patient local tissue by X-ray scanning of the patient by an X-ray machine.
4. The medical procedure auxiliary management system based on a core algorithm and a virtual reality technology according to claim 1, wherein the information synchronization unit performs information synchronization between a real environment and a virtual reality environment on patient image information by using an M-ASA algorithm, so that the patient image information in the virtual reality environment is consistent with image information acquired in the real environment.
5. The medical procedure auxiliary management system based on a core algorithm and virtual reality technology according to claim 4, wherein the M-ASA algorithm is specifically as follows: assuming that the two-dimensional coordinates of transmission node A and transmission node B are respectively (x_A, y_A) and (x_B, y_B), the distance between transmission node A and transmission node B is d = sqrt((x_A - x_B)^2 + (y_A - y_B)^2); when the velocities of transmission node A and transmission node B do not change during the synchronization process, the relative velocity vector between transmission node A and transmission node B is [formula FDA0004033854660000013]; transmission node A has a fixed time slot for transmitting patient image data packets during a period of length T, so that the relative displacement between the two transmission nodes between two successive receptions of data packets by transmission node B is [formula FDA0004033854660000014]; when transmission node B receives packets from transmission node A in three consecutive periods, the distance vectors between transmission node A and transmission node B are respectively [formula FDA0004033854660000021]; T_A1, T_A2, T_A3 are the clock counters of transmission node A when it sends the packets and T_B1, T_B2, T_B3 are the clock counters of transmission node B when it receives the packets, and the relation between transmission node A and transmission node B is [formula FDA0004033854660000022]; transmission node A uses a fixed time slot for transmitting packets in each period, i.e. [formula FDA0004033854660000023]; thus, transmission node B records its own clock counter to calculate the propagation time differences over the three consecutive periods, i.e. [formula FDA0004033854660000024]; taking the propagation speed during information transmission as c, there is [formula FDA0004033854660000025]; when transmission node B receives the patient image data packet from transmission node A for the second time, the distance between transmission node A and transmission node B is obtained as [formula FDA0004033854660000026], which is solved as [formula FDA0004033854660000027], wherein [formula FDA0004033854660000028]; the propagation time difference between the third and the fourth period is Δt_3, then [formula FDA0004033854660000029], and of the two solutions the one satisfying [formula FDA00040338546600000210] is taken; the time difference between the two time slots used respectively by transmission node A and transmission node B is ΔT; after the calculation of the distance vector is completed, when the clock counter of transmission node B is T_B4, transmission node B sends a packet to request a round-trip time correction from transmission node A; when the clock counter of transmission node A is T_A4, transmission node A receives the packet from transmission node B, and when its clock counter is T_A5 it replies with a packet to transmission node B; transmission node B receives the reply at T_B5 and calculates by [formula FDA00040338546600000211], wherein [formula FDA00040338546600000212]; information synchronization between the real environment and the virtual reality environment is thus performed on the patient image information through the M-ASA algorithm to reduce the information synchronization error, i.e. [formula FDA00040338546600000213].
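The M-ASA equations themselves are contained in the referenced formula images and are not reproduced in this text extraction. The sketch below therefore only illustrates the final round-trip correction step, using the standard two-way message-exchange estimate of clock offset and one-way delay, which the exchange of T_B4, T_A4, T_A5 and T_B5 described above resembles; the function name and the exact offset/delay formulas are assumptions, not the patent's M-ASA expressions.

```python
# Hedged sketch: standard two-way message-exchange clock correction, assumed as
# an approximation of the round-trip step in claim 5. All names are illustrative.
def round_trip_correction(t_b4: float, t_a4: float, t_a5: float, t_b5: float):
    """Estimate clock offset and one-way delay from one request/reply exchange.

    t_b4: B's clock counter when it sends the correction request
    t_a4: A's clock counter when it receives the request
    t_a5: A's clock counter when it sends the reply
    t_b5: B's clock counter when it receives the reply
    """
    # Offset of A's clock relative to B's (negative when B runs ahead of A).
    offset = ((t_a4 - t_b4) - (t_b5 - t_a5)) / 2.0
    # Estimated one-way propagation delay, assuming a symmetric link.
    delay = ((t_a4 - t_b4) + (t_b5 - t_a5)) / 2.0
    return offset, delay


if __name__ == "__main__":
    # Toy example: B's clock runs 3 time units ahead of A, true one-way delay is 5.
    offset, delay = round_trip_correction(t_b4=100.0, t_a4=102.0, t_a5=110.0, t_b5=118.0)
    print(offset, delay)  # -> -3.0 5.0
```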
6. The medical procedure auxiliary management system based on a core algorithm and virtual reality technology according to claim 1, wherein the data conversion unit, through a virtual reality device, converts the computer-generated electronic signals into image forms perceivable by human sense organs via various output devices.
7. The medical procedure auxiliary management system based on the core algorithm and the virtual reality technology according to claim 1, wherein the patient image similarity comparison unit performs similarity comparison on the patient images by using an S-ADH algorithm and finds and marks the difference parts.
8. The medical procedure auxiliary management system based on a core algorithm and virtual reality technology according to claim 7, wherein the S-ADH algorithm is specifically as follows: assume that for a given set of training data X = {x_1, x_2, …, x_n} it is required to learn a set of binary codes B = {b_1, b_2, …, b_n} that are compact and well preserve the semantic similarity between the data, and at the same time an effective hash function [formula FDA0004033854660000031] is required to encode the patient image into Hamming space, where W is a model parameter; to train the deep hash model, a similarity-preserving binary code is first generated from the training data, wherein the graph hashing problem is [formula FDA0004033854660000032], where L is the graph Laplacian, i.e. L = diag(A1) - A, the affinity matrix entry A_ij represents the similarity between the data pair (x_i, x_j) in the input feature space, A1 denotes A multiplied by the all-ones column vector 1, and the last two terms respectively force the learned binary codes to be uncorrelated and balanced; the overall training process of the S-ADH algorithm comprises three parts, namely deep hash function training, similarity graph updating and binary code optimization; first, the deep hash function training part, whose specific process is as follows: the deep hash model is trained using a Euclidean loss layer to measure the difference between the output of the deep model and the binary codes learned during the last iteration, i.e. [formula FDA0004033854660000033], wherein W = {W_l} are the parameters of the deep network architecture and W_l are the weight parameters of each layer; a model pre-trained on the large-scale ImageNet dataset is used as the initialization of the deep hash function and is fine-tuned on the hashing problems of the different datasets, and the parameters {W_l} of the proposed model are optimized by standard back-propagation and stochastic gradient descent; then the similarity graph updating part, whose specific process is as follows: by encoding the images with this deep model a more powerful deep representation is obtained, and the pairwise similarity graph is updated as [formula FDA0004033854660000034], wherein [formula FDA0004033854660000035] denotes the model feature of the last fully connected layer and σ is the bandwidth parameter, and the updated similarity graph is then brought into the next binary code optimization step; finally, the binary code optimization part, whose specific process is as follows: the graph hashing problem is represented by L(B), namely min L(B), s.t. B ∈ S_b, B ∈ S_p, wherein S_b is [-1, 1]^(r×n) and S_p is [formula FDA0004033854660000036]; binary code optimization is performed based on the ADMM algorithm, and two auxiliary variables Z_1 and Z_2 are introduced to absorb the constraints of S_b and S_p respectively, i.e. the graph hashing problem is restated as Z_1 = B, Z_1 ∈ S_b and Z_2 = B, Z_2 ∈ S_p, and it is solved according to the ADMM algorithm by the following formulas: [formula FDA0004033854660000037] and [formula FDA0004033854660000038], wherein δ_S(Z) is the indicator function, Y_1 and Y_2 are the dual variables and ρ_1 and ρ_2 are the penalty variables; the update of the code B is as follows: in iteration (k+1), all variables except B are fixed and B^(k+1) is updated according to the following formula, i.e. [formula FDA0004033854660000041], wherein [formula FDA0004033854660000042], and the gradient of the updated objective is 2BL + μ_1(BB^T - nI_r)B + μ_2·B·1·1^T + (ρ_1 + ρ_2)B + G; the updates of the auxiliary variables Z_1 and Z_2 are as follows: with B^(k+1) and [formula FDA0004033854660000043] fixed, [formula FDA0004033854660000044] is updated according to the following formula, i.e. [formula FDA0004033854660000045]; [formula FDA0004033854660000046] is solved by the proximal minimization method, i.e. [formula FDA0004033854660000047], wherein Π is the projection operation [formula FDA0004033854660000048]; [formula FDA0004033854660000049] is updated according to the following formula, i.e. [formula FDA00040338546600000410], which transforms [formula FDA00040338546600000411] into [formula FDA00040338546600000412]; the dual variables Y_1 and Y_2 are updated by gradient ascent: [formula FDA00040338546600000413] and [formula FDA00040338546600000414], wherein γ is a parameter for accelerating convergence; similarity comparison analysis is then performed on the patient images through the training of the S-ADH algorithm.
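The S-ADH formulas are likewise in the referenced images, so the sketch below only illustrates two elements that the claim text states directly: the graph Laplacian L = diag(A1) - A built from an affinity matrix A, and a similarity-graph update from deep features with a bandwidth parameter σ, for which a Gaussian kernel is assumed here. The ADMM updates of B, Z_1, Z_2, Y_1, Y_2 are deliberately omitted; all names are illustrative.

```python
# Hedged sketch of two pieces described in claim 8; the Gaussian-kernel form of the
# similarity update is an assumption suggested by the "bandwidth parameter" sigma.
import numpy as np


def graph_laplacian(A: np.ndarray) -> np.ndarray:
    """L = diag(A1) - A, where A1 is A times the all-ones column vector."""
    degrees = A @ np.ones(A.shape[0])
    return np.diag(degrees) - A


def update_similarity_graph(features: np.ndarray, sigma: float) -> np.ndarray:
    """Pairwise affinity computed from deep features (assumed Gaussian kernel)."""
    sq_dists = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (sigma ** 2))


if __name__ == "__main__":
    feats = np.random.default_rng(0).normal(size=(6, 4))          # toy deep features (n=6, dim=4)
    A = update_similarity_graph(feats, sigma=1.0)                 # updated similarity graph
    L = graph_laplacian(A)                                        # graph Laplacian
    B = np.sign(np.random.default_rng(1).normal(size=(8, 6)))     # toy r x n binary codes
    print(np.trace(B @ L @ B.T))                                  # a typical graph-hashing smoothness term
```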
9. The medical procedure auxiliary management system based on the core algorithm and the virtual reality technology according to claim 1, wherein the neural network prediction unit predicts the effect of the virtual reality medical process by using a QGA-LVQ neural network, and detects the effect of the medical process by analyzing the patient images in the virtual reality environment.
10. The medical procedure auxiliary management system based on a core algorithm and virtual reality technology according to claim 9, wherein the QGA-LVQ neural network is specifically as follows: in a two-dimensional complex vector space, the two different states of a qubit are defined as |0> and |1>, and the state of the qubit, as the minimum information unit, is a superposition of the two states expressed as |ψ> = τ|0> + ν|1>, wherein τ and ν are complex numbers representing the associated probability amplitudes and satisfy τ^2 + ν^2 = 1; in the QGA algorithm the chromosomes are encoded with qubits and quantum superposition states, and each quantum chromosome is encoded as [formula FDA00040338546600000416], wherein t is the population generation, the quantum population of the t-th generation is expressed as [formula FDA00040338546600000417], m is the number of qubits and n is the population size; the qubits complete state transitions through matrix transformations with quantum gates to realize the population evolution, and the qubit operation uses the quantum rotation gate, i.e. U(θ_i) = [cos θ_i, -sin θ_i; sin θ_i, cos θ_i]; the population evolution process is expressed as [formula FDA00040338546600000419], wherein θ_i is the rotation angle, adjusted according to the adjustment rule; quantum crossover is a crossover operation based on the full interference of the coherence property of quanta, and every quantum chromosome in the population needs to undergo the crossover operation; in the quantum mutation process, a quantum mutation operator U(ω(Δθ_i)) is used to achieve updating and optimization, i.e. [formula FDA00040338546600000420], ω(Δθ_i) = f(τ_i, ν_i)·Δθ_i, wherein f(τ_i, ν_i) is the rotation direction, Δθ_i is the rotation magnitude and δ is the adjustment factor; the structure of the LVQ neural network is divided into three layers, namely an input layer, a competition layer and an output layer, and the LVQ neural network is calculated as follows: first, the weights W_ij between the input layer and the competition layer and the learning rate η are initialized; then the input vector P = (p_1, p_2, …, p_R)^T is fed into the input layer, where R is the number of input elements, and the distance d between the competition layer neurons and the input vector is calculated, i.e. d_i = sqrt(Σ_(j=1..R) (p_j - W_ij)^2), wherein i = 1, 2, …, S_l and S_l is the number of competition layer neurons; then the competition layer neuron with the shortest distance to the input vector is selected: when d_j is the minimum, the class label of the output layer neuron connected to it is denoted C_j and the class label corresponding to the input vector is denoted C_x; when C_j = C_x the weight is updated as W_ij-new = W_ij-old + η(x - W_ij-old), and when C_j ≠ C_x the weight is updated as W_ij-new = W_ij-old - η(x - W_ij-old); finally, the LVQ neural network is trained cyclically until the error precision meets the requirement;
the QGA algorithm and the LVQ neural network are combined into a QGA-LVQ neural network, the main idea is that firstly, an initial value is selected for the LVQ neural network through the QGA algorithm, and then the total body gradually converges to an optimal solution so as to further improve classification accuracy, and the prediction of the QGA-LVQ neural network is specifically as follows: first, the genetic algebra t is set to 0, and population Q (t 0 ) Each individual of the initial population is observed once to obtain the state o (t)Then determining the fitness function of the QGA-LVQ neural network from the distances, calculating the fitness function using the average distance between the random individuals in the population and the sample points in the input layer, i.e
Figure FDA0004033854660000052
wherein ,Ft N is an element set belonging to k classes k For the number of k-class elements, x j For the input vector of training samples of the LVQ neural network, the fitness of the random individuals is calculated by the following formula, namely +.>
Figure FDA0004033854660000053
The termination condition of the iterative calculation is then +.>
Figure FDA0004033854660000054
N is the number of input vectors of samples, each individual in a group Q (t) is observed once, the fitness of each state is calculated, the group is updated through a quantum rotating gate, so that random individuals and the health condition of the random individuals are recorded, iteration is stopped until the error precision reaches the requirement, and the effect prediction is carried out on the virtual reality medical process through the training process of the QGA-LVQ neural network;
the medical process auxiliary management system based on the core algorithm and the virtual reality technology is characterized in that the auxiliary diagnosis module is used for rapidly realizing intelligent image diagnosis, analyzing the image data according to the auxiliary effect of the medical process in the virtual reality environment, and realizing the auxiliary diagnosis of the patient images.
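Of the pieces of claim 10, the LVQ competition and weight-update step is spelled out explicitly in the text (W_ij-new = W_ij-old ± η(x - W_ij-old)), so it can be sketched directly; the QGA side (quantum chromosome encoding, rotation-gate update, distance-based fitness) lives in the referenced formula images and is replaced here by a plain random weight initialization. Names and the toy training loop are illustrative assumptions.

```python
# Hedged sketch of the LVQ step written out in claim 10; the QGA initialization
# of the weights is not reproduced and a random initialization is used instead.
import numpy as np


def lvq_train_step(W: np.ndarray, labels: np.ndarray, x: np.ndarray,
                   x_label: int, eta: float) -> np.ndarray:
    """One LVQ update.

    W       : (S_l, R) weights between the input layer and the competition layer
    labels  : (S_l,) class label C_j attached to each competition-layer neuron
    x       : (R,) input vector
    x_label : class label C_x of the input vector
    eta     : learning rate
    """
    d = np.sqrt(((W - x) ** 2).sum(axis=1))   # distance of each competition neuron to x
    j = int(np.argmin(d))                     # winning (closest) neuron
    if labels[j] == x_label:                  # C_j == C_x: move the winner toward x
        W[j] = W[j] + eta * (x - W[j])
    else:                                     # C_j != C_x: move the winner away from x
        W[j] = W[j] - eta * (x - W[j])
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))               # 4 competition neurons, 3 input elements
    labels = np.array([0, 0, 1, 1])
    # Fixed number of toy updates; claim 10 instead loops until the error precision is met.
    for _ in range(100):
        x = rng.normal(size=3)
        y = int(x[0] > 0)                     # toy class label for the sample
        W = lvq_train_step(W, labels, x, y, eta=0.05)
```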
CN202211738137.8A 2022-12-31 2022-12-31 Medical process auxiliary management system based on core algorithm and virtual reality technology Pending CN116092644A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211738137.8A CN116092644A (en) 2022-12-31 2022-12-31 Medical process auxiliary management system based on core algorithm and virtual reality technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211738137.8A CN116092644A (en) 2022-12-31 2022-12-31 Medical process auxiliary management system based on core algorithm and virtual reality technology

Publications (1)

Publication Number Publication Date
CN116092644A true CN116092644A (en) 2023-05-09

Family

ID=86213254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211738137.8A Pending CN116092644A (en) 2022-12-31 2022-12-31 Medical process auxiliary management system based on core algorithm and virtual reality technology

Country Status (1)

Country Link
CN (1) CN116092644A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116825293A (en) * 2023-08-25 2023-09-29 青岛市胶州中心医院 Visual obstetrical image examination processing method
CN116825293B (en) * 2023-08-25 2023-11-07 青岛市胶州中心医院 Visual obstetrical image examination processing method

Similar Documents

Publication Publication Date Title
Chen et al. Renas: Reinforced evolutionary neural architecture search
CN111353076B (en) Method for training cross-modal retrieval model, cross-modal retrieval method and related device
CN113299354B (en) Small molecule representation learning method based on transducer and enhanced interactive MPNN neural network
CN111666919B (en) Object identification method and device, computer equipment and storage medium
CN112951386B (en) Image-driven brain map construction method, device, equipment and storage medium
CN115080801B (en) Cross-modal retrieval method and system based on federal learning and data binary representation
WO2022242127A1 (en) Image feature extraction method and apparatus, and electronic device and storage medium
CN111210002B (en) Multi-layer academic network community discovery method and system based on generation of confrontation network model
Chen et al. Reinforced evolutionary neural architecture search
CN116738911B (en) Wiring congestion prediction method and device and computer equipment
CN113516181B (en) Characterization learning method for digital pathological image
Bai et al. Correlative channel-aware fusion for multi-view time series classification
CN114548428B (en) Intelligent attack detection method and device of federated learning model based on instance reconstruction
CN116092644A (en) Medical process auxiliary management system based on core algorithm and virtual reality technology
CN116932722A (en) Cross-modal data fusion-based medical visual question-answering method and system
CN111091010A (en) Similarity determination method, similarity determination device, network training device, network searching device and storage medium
CN111815593B (en) Pulmonary nodule domain adaptive segmentation method, device and storage medium based on countermeasure learning
CN117036760A (en) Multi-view clustering model implementation method based on graph comparison learning
Qin et al. PointSkelCNN: Deep Learning‐Based 3D Human Skeleton Extraction from Point Clouds
CN112201348B (en) Knowledge-aware-based multi-center clinical data set adaptation device
CN114782503A (en) Point cloud registration method and system based on multi-scale feature similarity constraint
CN117038096A (en) Chronic disease prediction method based on low-resource medical data and knowledge mining
CN116313058A (en) Facial paralysis intelligent assessment method, system, equipment and storage medium
CN117616467A (en) Method for training and using deep learning algorithm to compare medical images based on reduced dimension representation
CN113269083A (en) Unsupervised cross-domain crowd counting method based on density isomorphic reconstruction of error perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination