CN109984841B - System for intelligently eliminating osteophytes of lower limb bone images by utilizing a generative adversarial network model


Info

Publication number
CN109984841B
CN109984841B (application number CN201910307088.4A)
Authority
CN
China
Prior art keywords
image
module
box
lower limb
image acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910307088.4A
Other languages
Chinese (zh)
Other versions
CN109984841A (en)
Inventor
吴小玲
刘志鹏
王伟
李修寒
竺明月
王黎明
姚庆强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Medical University
Original Assignee
Nanjing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Medical University filed Critical Nanjing Medical University
Priority to CN201910307088.4A priority Critical patent/CN109984841B/en
Publication of CN109984841A publication Critical patent/CN109984841A/en
Application granted granted Critical
Publication of CN109984841B publication Critical patent/CN109984841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00: Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10: Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107: Visualisation of planned trajectories or target regions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

The invention relates to the technical field of total knee joint replacement, and in particular to a system for intelligently eliminating osteophytes from lower limb bone images by using a generative adversarial network model. Once the model has been trained, the system can be reused to eliminate osteophytes from lower limb bone images rapidly, accurately and automatically, helping the doctor with surgical planning. It is simple to operate, highly accurate and accommodates individual differences between patients. By improving the basic data for preoperative planning, it guides surgical planning and prosthesis selection, improves the accuracy of the subsequent operation, extends the postoperative life of the prosthesis, reduces postoperative complications and plays an important role in improving the patient's postoperative quality of life. The first stepping motor and the second stepping motor drive the image sensor in all directions within the image acquisition box, so that the image sensor completes omnidirectional acquisition.

Description

System for intelligently eliminating osteophytes of lower limb bone images by utilizing a generative adversarial network model
Technical Field
The invention relates to the technical field of total knee joint replacement, and in particular to a system for intelligently eliminating osteophytes of lower limb bone images by utilizing a generative adversarial network (GAN) model.
Background
In total knee joint replacement, an artificial knee joint prosthesis made of materials such as metal, polymer polyethylene and ceramics is shaped according to the form, structure and function of the human joint and implanted into the human body by surgical technique to relieve joint pain, correct joint deformity, restore joint function and improve the patient's quality of life. Total knee replacement is commonly used to treat and improve severe knee pain, instability and deformity caused by diseases such as rheumatoid arthritis, knee osteoarthritis and, in a few cases, traumatic arthritis. The procedure comprises two steps, osteotomy and soft tissue release, and ultimately aims to restore the lower limb force line (mechanical axis), maintain soft tissue balance and achieve knee joint balance.
Firstly, in preoperative planning the osteophytes have a large influence on the localization of key landmarks such as the mechanical axis, the joint line and the femoral anteroposterior (AP) axis; misjudging the shape and position of an osteophyte causes deviation of the positioning landmarks, which in turn affects the function, stability and range of motion of the knee joint and easily leads to postoperative pain. Before a clinical total knee joint replacement, the doctor needs to plan and make decisions about the operation on the basis of lower limb bone image data from which the osteophytes have been removed; doctors are well versed in medical knowledge, but this image processing is time-consuming and laborious for them. An experienced doctor can state the requirements and guide or assist other technicians in eliminating osteophytes from lower limb bone images, but without the doctor's medical experience it is difficult for those technicians to eliminate the osteophytes perfectly, so the operation remains hard to carry out. In view of this, we propose a system for intelligently eliminating osteophytes from lower limb bone images by means of a generative adversarial network model.
Disclosure of Invention
The invention aims to provide a system for intelligently eliminating osteophytes from lower limb bone images by utilizing a generative adversarial network model, which saves a great deal of time for orthopaedic doctors and also guides and assists less experienced doctors.
In order to achieve the above object, in one aspect, the present invention provides a system for intelligently eliminating osteophytes from lower limb bone images by using a generative adversarial network model. The system comprises an image acquisition box and a box cover installed on the top of the image acquisition box. The image acquisition box is a hollow box body with an opening at one end, and an acquisition device for acquiring images is installed on the inner wall of the image acquisition box. The acquisition device comprises a pair of mounting blocks with L-shaped cross sections; a first lead screw nut is installed on the inner wall of each mounting block, a first ball screw is threaded through the first lead screw nut, and a first stepping motor is installed at one end of the first ball screw. A second ball screw is arranged between the two mounting blocks, a second lead screw nut is threaded onto the second ball screw, a bearing block is installed on the outer wall of the second lead screw nut, an image sensor is installed on the top of the bearing block, and a second stepping motor is installed at one end of the second ball screw.
Preferably, the inner wall of the image acquisition box is provided with a guide rail groove, and the mounting block is in sliding fit with the guide rail groove.
Preferably, a glass plate is installed on the inner wall of the image acquisition box, an insertion block is provided on the outer wall of the glass plate, a slot is formed in the side of the image acquisition box close to the glass plate, and the insertion block fits into the slot.
Preferably, a pair of sliding grooves are formed in the top of the image acquisition box, one on each of the two sides, and the two sliding grooves are separated by a baffle; the box cover is provided as a pair; a groove is formed in each of the two sides of the bottom of each box cover; a fixed block and a moving block are arranged in each groove, the moving block is in sliding fit with the groove, and a spring is installed between the fixed block and the moving block.
Preferably, the fixed block is close to one end of the baffle.
Preferably, the top of the box cover is provided with a buckling groove which is arc-shaped.
Preferably, the outer wall of the image acquisition box is further provided with an image processing system, and the image processing system comprises an image acquisition module for acquiring image data and an image training module for training the acquired image data.
Preferably, the image acquisition module comprises an image sensor module, an amplification and filtering module, an A/D conversion module, a stepping motor module and a signal processing module;
the image acquisition module is used for calling the image sensor to acquire an image;
the amplifying and filtering module is used for carrying out pre-inversion, filtering and amplifying processing on the acquired image signals;
the A/D conversion module is used for carrying out digital processing on the image signal;
the stepping motor module is used for controlling the rotating speed of the first stepping motor and the second stepping motor;
and the signal processing module is used for carrying out fixed-point processing on the converted digital signals.
Preferably, the image training module comprises a data collection module, a generator establishing module, a mapping sample module, a discriminator establishing module and an output result module;
the data collection module is used for collecting lower limb bone image data with osteophytes eliminated and lower limb bone image data with osteophytes not eliminated;
the generator establishing module is used for inputting the lower limb bone image data with osteophytes not eliminated into a generator G;
the mapping sample module is used for adopting a multilayer perceptron (MLP) network structure, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space;
the discriminator establishing module is used for inputting the lower limb bone image data with osteophytes eliminated, together with the samples G(z) mapped by the generator G, into a discriminator D;
and the output result module is used for converting the final judgment of the discriminator D with a Sigmoid function and expressing it as 0 or 1.
Preferably, the operation steps of the system for intelligently eliminating osteophytes from lower limb bone images by utilizing the generative adversarial network model are as follows:
s1, opening the box cover: an operator buckles the buckle grooves with two hands respectively and pulls the buckle grooves towards two sides with force, at the moment, the fixed block and the moving block both slide in the sliding groove, so that the two box covers move backwards on the image acquisition box until the moving block contacts the inner wall of one side of the sliding groove, at the moment, the box covers are continuously pushed, the moving block slides towards one side of the fixed block in the groove and extrudes the spring to contract, the box covers continue to move backwards on the image acquisition box until the box covers are completely opened, and at the moment, the image pictures can be placed on the glass plate from the two box covers;
s2, closing the box cover: after two hands are released, the fixing block is pushed out under the action of the elasticity of the spring, so that the fixing block moves towards one side of the baffle plate in the sliding groove until the fixing block abuts against the baffle plate, the two box covers are overlapped, and the image acquisition box is sealed;
s3, image acquisition: attaching an image picture with or without the lower limb bone osteophyte on a glass plate, switching on a power supply of an image sensor to enable the image sensor to work, collecting the image picture through the image sensor, simultaneously switching on the power supply of a first stepping motor to enable the first stepping motor to work, driving a first ball screw to rotate by the first stepping motor, screwing a first screw nut to perform linear motion on the first ball screw, further driving a mounting block to transversely move in a guide rail groove, switching on the power supply of a second stepping motor to enable the second stepping motor to work, driving a second ball screw to rotate by the second stepping motor, screwing the second screw nut to perform linear motion on the second ball screw, further driving the linear motion of the image sensor, and completing the all-directional collection of the image sensor;
s4, establishing a generator: inputting the image data without eliminating the lower limb osteophyte into a generator G;
s5, mapping sample: adopting a network structure of a multilayer perceptron, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space;
s6, establishing a discriminator: inputting the image data for eliminating the lower limb osteophyte and the sample G (z) mapped by the generator G into a discriminator D;
s7, outputting a result: the final discrimination result of the discriminator D is expressed by "0" and "1" using the "Sigmoid function" transformation.
Compared with the prior art, the invention has the beneficial effects that:
1. In this system for intelligently eliminating osteophytes from lower limb bone images by using a generative adversarial network model, the first stepping motor and the second stepping motor drive the image sensor in all directions within the image acquisition box, so that the image sensor completes omnidirectional acquisition and the acquisition of image picture information is improved.
2. In this system, when the hands are released the spring pushes the fixed block out under its own elasticity, so that the fixed block moves towards the baffle in the sliding groove until it abuts against the baffle; the two box covers then overlap and seal the image acquisition box, improving the light-shielding effect of the image acquisition box.
3. Once the model has been trained, this system can be reused to eliminate osteophytes from lower limb bone images rapidly, accurately and automatically, helping the doctor with surgical planning. It is simple to operate, highly accurate and accommodates individual differences between patients. By improving the basic data for preoperative planning, it guides surgical planning and prosthesis selection, improves the accuracy of the subsequent operation, extends the postoperative life of the prosthesis, reduces postoperative complications and plays an important role in improving the patient's postoperative quality of life.
Drawings
FIG. 1 is a schematic view of the overall structure of the present invention;
FIG. 2 is a schematic view of the internal structure of the image capturing case according to the present invention;
FIG. 3 is a schematic view of the image capture box of the present invention;
FIG. 4 is a schematic view of the structure of the collecting device of the present invention;
FIG. 5 is a schematic view of the structure of the glass plate of the present invention;
FIG. 6 is a schematic structural diagram of an image capture box according to a second embodiment of the present invention;
FIG. 7 is a schematic view of the backside structure of the box cover of the present invention;
FIG. 8 is a front view of the lid of the present invention;
FIG. 9 is a schematic view of the overall structure of an image capturing box according to embodiment 3 of the present invention;
FIG. 10 is a block diagram of an image processing system of the present invention;
FIG. 11 is a block diagram of an image acquisition module of the present invention;
FIG. 12 is a diagram of an image training module of the present invention;
FIG. 13 is a block diagram of the overall process of the image training module according to the present invention.
In the figure: 1. an image acquisition box; 11. a guide rail groove; 12. a slot; 13. a chute; 14. a baffle plate; 2. a box cover; 21. a groove; 22. a fixed block; 23. a moving block; 24. a spring; 25. buckling grooves; 3. a glass plate; 31. inserting a block; 4. a collection device; 41. mounting blocks; 42. a first lead screw nut; 43. a first ball screw; 44. a first stepper motor; 45. a second ball screw; 46. a second lead screw nut; 47. a bearing block; 48. an image sensor; 49. a second stepping motor; 5. an image processing system.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the equipment or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
Example 1
In one aspect, the present invention provides a system for intelligently eliminating osteophytes from lower limb bone images by using a generative adversarial network model. As shown in fig. 1-5, the system comprises an image acquisition box 1 and a box cover 2 installed on the top of the image acquisition box 1. The image acquisition box 1 is a hollow box body with an opening at one end, and an acquisition device 4 for acquiring images is installed on the inner wall of the image acquisition box 1. The acquisition device 4 comprises a pair of mounting blocks 41 with L-shaped cross sections; a first lead screw nut 42 is installed on the inner wall of each mounting block 41, a first ball screw 43 is threaded through the first lead screw nut 42, and a first stepping motor 44 is installed at one end of the first ball screw 43. A second ball screw 45 is arranged between the two mounting blocks 41, a second lead screw nut 46 is threaded onto the second ball screw 45, a bearing block 47 is installed on the outer wall of the second lead screw nut 46, an image sensor 48 is mounted on the top of the bearing block 47, and a second stepping motor 49 is mounted at one end of the second ball screw 45.
In this embodiment, the inner wall of the image capturing box 1 is provided with a guide rail groove 11, the mounting block 41 is in sliding fit with the guide rail groove 11, and the L-shaped mounting block 41 is adopted, so that one side of the mounting block 41 can be inserted into the guide rail groove 11 to slide, and the other side of the mounting block 41 can be used for mounting the second stepping motor 49.
Further, a glass plate 3 is installed on the inner wall of the image acquisition box 1, an insertion block 31 is provided on the outer wall of the glass plate 3, a slot 12 is formed in the side of the image acquisition box 1 close to the glass plate 3, and the insertion block 31 fits into the slot 12. This makes it convenient to fix the glass plate 3 in the image acquisition box 1, while the image picture to be acquired is placed on the glass plate 3 without affecting the acquisition by the image sensor 48.
Specifically, one end of the first ball screw 43 is coaxially arranged with an output shaft of the first stepping motor 44, and the other end of the first ball screw 43 is rotatably connected to the inner wall of the guide rail groove 11 through a bearing, so that the first stepping motor 44 drives the first ball screw 43 to rotate in the guide rail groove 11.
In addition, one end of the second ball screw 45 is coaxially disposed with the output shaft of the second stepping motor 49, and the other end of the second ball screw 45 is rotatably connected to the outer wall of the mounting block 41 through a bearing, so that the second stepping motor 49 drives the second ball screw 45 to rotate on the mounting block 41.
When the system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model is used for image acquisition, an image picture in which the lower limb bone osteophytes have been eliminated, or one in which they have not, is attached to the glass plate 3. The image sensor 48 is then powered on so that it works and acquires the image picture. At the same time the first stepping motor 44 is powered on; it drives the first ball screw 43 to rotate, the first lead screw nut 42 is screwed along the first ball screw 43 in a linear motion, and the mounting block 41 is thereby driven to move transversely in the guide rail groove 11. The second stepping motor 49 is likewise powered on; it drives the second ball screw 45 to rotate, the second lead screw nut 46 is screwed along the second ball screw 45 in a linear motion, and the image sensor 48 is thereby driven linearly. In this way the image sensor 48 completes its omnidirectional acquisition and the acquisition of image picture information is improved.
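Read in control terms, the two lead screws give the image sensor 48 one transverse axis (driven by the first stepping motor 44) and one longitudinal axis (driven by the second stepping motor 49), so the omnidirectional acquisition described above amounts to a raster scan over the glass plate 3. The following minimal Python sketch illustrates such a scan loop; the grid size and the step_x, step_y and capture_frame callables are hypothetical stand-ins for the motor drivers and the sensor readout and are not defined by the patent:

# Minimal raster-scan sketch for the two-axis acquisition described above.
# step_x(), step_y() and capture_frame() are hypothetical placeholders for the
# stepper-motor drivers (motors 44 and 49) and the image sensor 48.

def raster_scan(step_x, step_y, capture_frame, nx=20, ny=20):
    """Sweep the sensor over an nx-by-ny grid and collect one frame per position."""
    frames = []
    for iy in range(ny):                               # longitudinal rows (second ball screw 45)
        for ix in range(nx):                           # transverse positions (first ball screw 43)
            frames.append(capture_frame())
            if ix < nx - 1:
                step_x(+1 if iy % 2 == 0 else -1)      # boustrophedon sweep, no return stroke
        if iy < ny - 1:
            step_y(+1)                                 # advance one row
    return frames

# example with dummy drivers: 3 x 2 grid gives 6 captured frames
frames = raster_scan(lambda d: None, lambda d: None, lambda: "frame", nx=3, ny=2)
print(len(frames))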
Example 2
As a second embodiment of the present invention, in order to give the image capturing box 1 a good light-shielding effect when capturing an image picture, the present invention provides an improvement to the image capturing box 1, as shown in fig. 6 to 8. As a preferred embodiment, a pair of sliding grooves 13 are formed in the top of the image capturing box 1, one on each of the two sides, and the two sliding grooves 13 are separated by a baffle 14; the box cover 2 is provided as a pair; a groove 21 is formed in each of the two sides of the bottom of each box cover 2; a fixed block 22 and a moving block 23 are arranged in each groove 21, the moving block 23 is in sliding fit with the groove 21, and a spring 24 is installed between the fixed block 22 and the moving block 23.
In this embodiment, the fixing block 22 is close to one end of the baffle 14, so that when the box cover 2 is opened, after the moving block 23 is blocked by the inner wall of the sliding groove 13, the moving block 23 can slide in the sliding groove 13, and the box cover 2 is ensured to be smoothly opened.
Further, a buckle groove 25 is formed in the top of the box cover 2, the buckle groove 25 is arc-shaped, the box cover 2 is convenient to move by pulling the buckle groove 25, and the arc-shaped buckle groove 25 accords with the ergonomic design.
Specifically, the fixed block 22 and the moving block 23 are in sliding fit with the sliding groove 13, and the fixed block 22 and the moving block 23 slide in the sliding groove 13, so that the box cover 2 is pushed open conveniently.
When the box cover 2 of the system for intelligently eliminating the bone osteophyte of the lower limb bone by utilizing the generated confrontation network model is used, two hands of an operator are respectively buckled in the buckling grooves 25 and are pulled towards two sides with force, at the moment, the fixing block 22 and the moving block 23 both slide in the sliding groove 13, so that the two box covers 2 move backwards on the image acquisition box 1 until the moving block 23 contacts the inner wall of one side of the sliding groove 13, at the moment, the box cover 2 is continuously pushed, the moving block 23 slides towards one side of the fixing block 22 in the groove 21 and extrudes the spring 24 to shrink, the box cover 2 continuously moves backwards on the image acquisition box 1 until the box cover 2 is completely opened, at the moment, image pictures can be put into the glass plate 3 from the two box covers 2, after the two hands are loosened, the fixing block 22 is pushed out under the elastic action of the spring 24, so that the fixing block 22 moves towards one side of the baffle 14 in the sliding groove 13, until the fixing block 22 abuts against the baffle 14, the two box covers 2 are overlapped, and the image acquisition box 1 is sealed.
Example 3
As a third embodiment of the present invention, in order to facilitate training on the acquired image data, the present invention further provides an image processing system 5, as shown in fig. 9-13. As a preferred embodiment, the image processing system 5 is provided on an outer wall of the image acquisition box 1 and comprises an image acquisition module for acquiring image data and an image training module for training on the acquired image data. The image acquisition module comprises an image sensor module, an amplification and filtering module, an A/D conversion module, a stepping motor module and a signal processing module. The image acquisition module is used for calling the image sensor 48 to acquire an image; the amplification and filtering module is used for inverting, filtering and amplifying the acquired image signal; the A/D conversion module is used for digitizing the image signal; the stepping motor module is used for controlling the rotating speed of the first stepping motor 44 and the second stepping motor 49; and the signal processing module is used for fixed-point processing of the converted digital signal. The image training module comprises a data collection module, a generator establishing module, a mapping sample module, a discriminator establishing module and an output result module. The data collection module is used for collecting lower limb bone image data with osteophytes eliminated and lower limb bone image data with osteophytes not eliminated; the generator establishing module is used for inputting the lower limb bone image data with osteophytes not eliminated into a generator G; the mapping sample module is used for adopting a multilayer perceptron (MLP) network structure, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space; the discriminator establishing module is used for inputting the lower limb bone image data with osteophytes eliminated, together with the samples G(z) mapped by the generator G, into a discriminator D; and the output result module is used for converting the final judgment of the discriminator D with a Sigmoid function and expressing it as 0 or 1.
In this embodiment, the image sensor 48 uses a TCD1208AP linear-array CCD. The TCD1208AP is a linear-array CCD sensing chip produced by Toshiba Corporation of Japan; it has 2160 pixels with a pixel size and pitch of 14 μm × 14 μm, high sensitivity, low dark current, a single 5 V supply voltage and two-phase output, and it meets the acquisition requirements of the present invention.
Furthermore, the characteristics of the CCD output signal mean that it cannot be sent directly to the A/D converter; it must first undergo a series of hardware pre-processing steps to remove the interference caused by the drive pulses and the noise in the signal, namely inversion, filtering and amplification. In this embodiment a CA3450 operational amplifier is used as the amplification and filtering module to perform the inversion and amplification, and an RC filter stage connected at the output of the CA3450 filters out the noise. The processed signal can then be sent to the A/D converter for digitization. The A/D conversion module uses the CA3318CE, an 8-bit high-speed parallel (flash) A/D conversion chip, which fully meets the working requirement of the CCD (1 MHz). The A/D conversion turns the signal into a digital quantity that reflects the grey-scale variation of the image, improving measurement accuracy and resolution. When the output of the CA3318CE is enabled, the A/D conversion result is placed on the 8-bit data bus, and the data can be written into the SRAM data memory provided that the write and address enables of the data memory are asserted.
Specifically, the signal processing module can be a TMS320VC5402 processor. The TMS320VC5402 is a fixed-point digital signal processor with a Harvard architecture and an advanced multi-bus structure; its 40-bit arithmetic logic unit (ALU) includes a 40-bit barrel shifter and two 40-bit accumulators, its data/program addressing space is 64K words/1 MB, and it contains 16 KB of on-chip RAM, 4 KB of on-chip ROM and two buffered serial ports. In addition, it provides a DMA mode and several on-chip peripherals, and its operation speed reaches 100 MIPS.
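As a rough illustration of the digital side of this chain, the sketch below conditions one line of CCD samples (inversion and amplification), quantizes it to the 8-bit range delivered by the A/D converter and packs the codes into Q15 fixed-point words of the kind a 16-bit fixed-point DSP such as the one above operates on. The gain, offset, full-scale voltage, Q15 choice and function names are illustrative assumptions and are not specified in the patent:

import numpy as np

# Illustrative sketch of the signal chain: inversion/amplification, 8-bit A/D
# quantization, then conversion to Q15 fixed point for a fixed-point DSP.
# All constants and names are assumptions for illustration only.

def preprocess_line(raw_line, gain=2.0, offset=1.0):
    """Invert and amplify one line of CCD samples (values in volts)."""
    return gain * (offset - np.asarray(raw_line, dtype=float))

def adc_8bit(analog, full_scale=5.0):
    """Quantize the conditioned signal to the 8-bit codes of the A/D converter."""
    codes = np.clip(analog / full_scale, 0.0, 1.0) * 255.0
    return codes.astype(np.uint8)

def to_q15(codes):
    """Map 8-bit codes onto Q15 fixed-point words (int16, range -32768..32767)."""
    normalized = codes.astype(np.int32) - 128      # centre around zero
    return (normalized << 8).astype(np.int16)      # scale into the Q15 range

line = preprocess_line([0.2, 0.8, 1.0, 0.5])
print(to_q15(adc_8bit(line)))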
In addition, for the image training module, the data required for training the adversarial network model consist of a large amount of lower limb bone image data with osteophytes eliminated, whose distribution is recorded as p_data, and lower limb bone image data with osteophytes not eliminated, recorded as p_z. For the generator G, the lower limb bone image data without osteophyte elimination are input into G; the network structure of a multilayer perceptron is adopted, and the parameters of the MLP represent a differentiable mapping G(z) which maps the input space to the sample space. The discriminator D is a multilayer perceptron with parameters, recorded as D(x); it receives the real samples, i.e. the lower limb bone image data x with osteophytes eliminated, together with the samples G(z) forged by the generator. For the output, the final discrimination result of the discriminator is converted with a Sigmoid function and expressed as "0" or "1". The function V(G, D) represents the final optimization objective, as follows:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
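Read concretely, this objective trains D to output a value near 1 for real (osteophyte-eliminated) images and near 0 for generated ones, while G is trained to make D output values near 1 for its forgeries. The sketch below shows, using PyTorch purely for illustration and with assumed layer sizes and dimensions that are not taken from the patent, the two multilayer perceptrons with a Sigmoid output on the discriminator and a minibatch estimate of V(D, G):

import torch
import torch.nn as nn

# Minimal MLP generator G and discriminator D with a Sigmoid output, plus a
# minibatch estimate of V(D, G) as defined above.
# Layer sizes (z_dim=100, images flattened to 784 values) are assumptions.

z_dim, x_dim = 100, 784

G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                  nn.Linear(256, x_dim), nn.Tanh())      # differentiable mapping G(z)

D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                  nn.Linear(256, 1), nn.Sigmoid())        # outputs a value in (0, 1)

def value_v(real_x, m=64):
    """Monte-Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    z = torch.randn(m, z_dim)
    return torch.log(D(real_x)).mean() + torch.log(1 - D(G(z))).mean()

real = torch.rand(8, x_dim)        # stand-in for a batch of osteophyte-eliminated images
print(value_v(real, m=8))

Evaluating value_v on a held-out minibatch during training is one simple way to check whether the generator and discriminator remain balanced.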
In addition, the optimization process of the generative adversarial network model, i.e. its minimax optimization, essentially consists of two optimization problems, one for the generator G and one for the discriminator D. We alternate between k steps of optimizing the discriminator D and one step of optimizing the generator G. The final optimization objectives of the discriminator D and the generator G are respectively as follows:
optimization objective of discriminator D:
\max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
optimization objective of generator G:
\min_G \; \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
It should be noted that, in the optimization procedure of the discriminator D of the generative adversarial network, m samples {x_1, x_2, ..., x_m} are drawn from the lower limb bone image data with osteophytes eliminated, and at the same time m noise samples {z_1, z_2, ..., z_m} are drawn from the lower limb bone image data without osteophyte elimination and sent to the generator G to produce {G(z_1), G(z_2), ..., G(z_m)}. The parameters \theta_d of the discriminator D are then updated by gradient ascent,

\nabla_{\theta_d} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x_i) + \log\bigl(1 - D(G(z_i))\bigr) \right],

so as to maximize

\frac{1}{m} \sum_{i=1}^{m} \left[ \log D(x_i) + \log\bigl(1 - D(G(z_i))\bigr) \right].

This process is repeated k times within one iteration of the optimization loop, ensuring that the cost function is maximized.
Further, in the optimization procedure of the generator G of the generative adversarial network, another m noise samples {z_1, z_2, ..., z_m} are drawn from the lower limb bone image data without osteophyte elimination, and the parameters \theta_g of the generator are updated by gradient descent,

\nabla_{\theta_g} \frac{1}{m} \sum_{i=1}^{m} \log\bigl(1 - D(G(z_i))\bigr),

so as to minimize

\frac{1}{m} \sum_{i=1}^{m} \log\bigl(1 - D(G(z_i))\bigr).

This step is performed once within one iteration of the optimization loop, which prevents the JS divergence from rising as a result of too many generator updates.
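A compact sketch of this alternating scheme, k gradient steps on the discriminator followed by one on the generator, is given below. It reuses the same toy MLPs as the earlier sketch; the learning rates, batch size m and value of k are assumptions, and the generator step minimizes log(1 - D(G(z))) exactly as stated above rather than reproducing any particular implementation by the inventors:

import torch
import torch.nn as nn

# Alternating GAN optimization: k gradient-ascent steps on D, one gradient-descent
# step on G. Network sizes, learning rates, m and k are illustrative assumptions.

z_dim, x_dim = 100, 784
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_d = torch.optim.SGD(D.parameters(), lr=1e-3)
opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)

def train_step(real_batch, k=5, m=64):
    # k discriminator steps: ascend log D(x) + log(1 - D(G(z))) by descending its negative
    for _ in range(k):
        z = torch.randn(m, z_dim)
        fake = G(z).detach()                    # block gradients into G during the D step
        loss_d = -(torch.log(D(real_batch)).mean() + torch.log(1 - D(fake)).mean())
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()
    # one generator step: minimize log(1 - D(G(z)))
    z = torch.randn(m, z_dim)
    loss_g = torch.log(1 - D(G(z))).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(64, x_dim)))        # random stand-in for an osteophyte-eliminated batch

In practice many GAN implementations instead have the generator maximize log D(G(z)) to avoid early gradient saturation, but the sketch keeps the minimization of log(1 - D(G(z))) used in this description.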
In another aspect, the present invention further provides an operating method of the system for intelligently eliminating osteophytes of lower limb bone images by using the generative adversarial network model, which comprises the following steps:
s1, opening the box cover 2: an operator buckles the two hands in the buckling grooves 25 respectively and pulls the two sides with force, at the moment, the fixed block 22 and the moving block 23 both slide in the sliding groove 13, so that the two box covers 2 move backwards on the image acquisition box 1 until the moving block 23 contacts the inner wall of one side of the sliding groove 13, at the moment, the box covers 2 are continuously pushed, the moving block 23 slides towards one side of the fixed block 22 in the groove 21 and extrudes the spring 24 to contract, the box covers 2 continue to move backwards on the image acquisition box 1 until the box covers 2 are completely opened, and at the moment, image pictures can be placed on the glass plate 3 from the two box covers 2;
s2, closing the box cover 2: after two hands are released, the fixing block 22 is pushed out under the action of the elasticity of the spring 24, so that the fixing block 22 moves towards one side of the baffle plate 14 in the sliding groove 13 until the fixing block 22 abuts against the baffle plate 14, the two box covers 2 are overlapped, and the image acquisition box 1 is sealed;
s3, image acquisition: attaching an image picture with or without the lower limb bone osteophyte on the glass plate 3, switching on a power supply for the image sensor 48 to work, collecting the image picture through the image sensor 48, simultaneously switching on the power supply for the first stepping motor 44 to work, driving the first ball screw 43 to rotate by the first stepping motor 44, screwing the first screw nut 42 on the first ball screw 43 to perform linear motion, further driving the mounting block 41 to perform transverse motion in the guide rail groove 11, switching on the power supply for the second stepping motor 49 to work, driving the second ball screw 45 to rotate by the second stepping motor 49, screwing the second screw nut 46 on the second ball screw 45 to perform linear motion, further driving the linear motion of the image sensor 48, and completing the omnibearing collection of the image sensor 48;
s4, establishing a generator: inputting the image data without eliminating the lower limb osteophyte into a generator G;
s5, mapping sample: adopting a network structure of a multilayer perceptron, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space;
s6, establishing a discriminator: inputting the image data for eliminating the lower limb osteophyte and the sample G (z) mapped by the generator G into a discriminator D;
s7, outputting a result: the final discrimination result of the discriminator D is expressed by "0" and "1" using the "Sigmoid function" transformation.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, and the preferred embodiments of the present invention are described in the above embodiments and the description, and are not intended to limit the present invention. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. A system for intelligently eliminating osteophytes from lower limb bone images by using a generative adversarial network model, comprising an image acquisition box (1) and a box cover (2) installed on the top of the image acquisition box (1), characterized in that: the image acquisition box (1) is a hollow box body with an opening at one end; an acquisition device (4) for acquiring images is installed on the inner wall of the image acquisition box (1); the acquisition device (4) comprises a pair of mounting blocks (41) with L-shaped cross sections; a first lead screw nut (42) is installed on the inner wall of each mounting block (41), a first ball screw (43) is threaded through the first lead screw nut (42), and a first stepping motor (44) is installed at one end of the first ball screw (43); a second ball screw (45) is arranged between the two mounting blocks (41), a second lead screw nut (46) is threaded onto the second ball screw (45), a bearing block (47) is installed on the outer wall of the second lead screw nut (46), an image sensor (48) is installed on the top of the bearing block (47), and a second stepping motor (49) is installed at one end of the second ball screw (45); an image processing system (5) is further provided on the outer wall of the image acquisition box (1), and the image processing system (5) comprises an image acquisition module for acquiring image data and an image training module for training on the acquired image data; the image acquisition module comprises an image sensor module, an amplification and filtering module, an A/D conversion module, a stepping motor module and a signal processing module;
the image acquisition module is used for calling an image sensor (48) to acquire images;
the amplifying and filtering module is used for carrying out pre-inversion, filtering and amplifying processing on the acquired image signals;
the A/D conversion module is used for carrying out digital processing on the image signal;
the stepping motor module is used for controlling the rotating speed of a first stepping motor (44) and a second stepping motor (49);
the signal processing module is used for carrying out fixed-point processing on the converted digital signal;
the image training module comprises a data collection module, a generator establishing module, a mapping sample module, a discriminator establishing module and an output result module;
the data collection module is used for collecting lower limb bone image data with osteophytes eliminated and lower limb bone image data with osteophytes not eliminated;
the generator establishing module is used for inputting the lower limb bone image data with osteophytes not eliminated into a generator G;
the mapping sample module is used for adopting a multilayer perceptron (MLP) network structure, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space;
the discriminator establishing module is used for inputting the lower limb bone image data with osteophytes eliminated, together with the samples G(z) mapped by the generator G, into a discriminator D;
and the output result module is used for converting the final judgment of the discriminator D with a Sigmoid function and expressing it as 0 or 1.
2. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to claim 1, wherein: a guide rail groove (11) is formed in the inner wall of the image acquisition box (1), and the mounting block (41) is in sliding fit with the guide rail groove (11).
3. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to claim 1, wherein: a glass plate (3) is installed on the inner wall of the image acquisition box (1), an insertion block (31) is provided on the outer wall of the glass plate (3), a slot (12) is formed in the side of the image acquisition box (1) close to the glass plate (3), and the insertion block (31) fits into the slot (12).
4. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to claim 1, wherein: a pair of sliding grooves (13) are formed in the top of the image acquisition box (1), one on each of the two sides, and the two sliding grooves (13) are separated by a baffle (14); the box cover (2) is provided as a pair; a groove (21) is formed in each of the two sides of the bottom of each box cover (2); a fixed block (22) and a moving block (23) are arranged in each groove (21), the moving block (23) is in sliding fit with the groove (21), and a spring (24) is installed between the fixed block (22) and the moving block (23).
5. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to claim 4, wherein: the fixed block (22) is located at the end close to the baffle (14).
6. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to claim 4, wherein: a buckling groove (25) is formed in the top of the box cover (2), and the buckling groove (25) is arc-shaped.
7. The system for intelligently eliminating osteophytes from lower limb bone images by using the generative adversarial network model according to any one of claims 1 to 6, comprising the following operation steps:
s1, opening the box cover (2): an operator buckles the two hands in the buckling grooves (25) respectively and pulls the two sides with force, at the moment, the fixed block (22) and the moving block (23) slide in the sliding groove (13), so that the two box covers (2) move backwards on the image acquisition box (1) until the moving block (23) contacts the inner wall of one side of the sliding groove (13), at the moment, the box covers (2) are continuously pushed, the moving block (23) slides towards one side of the fixed block (22) in the groove (21), the extrusion spring (24) contracts, the box covers (2) continue to move backwards on the image acquisition box (1) until the box covers (2) are completely opened, and at the moment, image pictures can be placed on the glass plate (3) from the two box covers (2);
s2, closing the box cover (2): after two hands are released, the fixing block (22) is pushed out under the action of the elasticity of the spring (24), so that the fixing block (22) moves towards one side of the baffle (14) in the sliding groove (13) until the fixing block (22) abuts against the baffle (14), the two box covers (2) are overlapped, and the image acquisition box (1) is sealed;
s3, image acquisition: attaching an image picture of which the lower limb bone osteophyte is eliminated or an image picture of which the lower limb bone osteophyte is not eliminated on a glass plate (3), switching on a power supply of an image sensor (48) to work, collecting the image picture through the image sensor (48), simultaneously switching on the power supply of a first stepping motor (44) to work, driving a first ball screw (43) to rotate by the first stepping motor (44), and carrying out linear motion on the first ball screw (43) by screwing a first screw nut (42), further driving an installation block (41) to transversely move in a guide rail groove (11), switching on the power supply of a second stepping motor (49) to work, driving a second ball screw (45) to rotate by the second stepping motor (49), and carrying out linear motion on the second ball screw (45) by screwing a second screw nut (46), further driving the linear motion of the image sensor (48), completing the omnibearing acquisition of the image sensor (48);
s4, establishing a generator: inputting the image data without eliminating the lower limb osteophyte into a generator G;
s5, mapping sample: adopting a network structure of a multilayer perceptron, representing a differentiable mapping G(z) by the parameters of the MLP, and mapping the input space to the sample space;
s6, establishing a discriminator: inputting the image data for eliminating the lower limb osteophyte and the sample G (z) mapped by the generator G into a discriminator D;
s7, outputting a result: the final discrimination result of the discriminator D is expressed by "0" and "1" using the "Sigmoid function" transformation.
CN201910307088.4A 2019-04-17 2019-04-17 System for intelligently eliminating osteophytes of lower limb bone images by utilizing generated confrontation network model Active CN109984841B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910307088.4A CN109984841B (en) 2019-04-17 2019-04-17 System for intelligently eliminating osteophytes of lower limb bone images by utilizing generated confrontation network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910307088.4A CN109984841B (en) 2019-04-17 2019-04-17 System for intelligently eliminating osteophytes of lower limb bone images by utilizing generated confrontation network model

Publications (2)

Publication Number Publication Date
CN109984841A CN109984841A (en) 2019-07-09
CN109984841B true CN109984841B (en) 2021-12-17

Family

ID=67133863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910307088.4A Active CN109984841B (en) 2019-04-17 2019-04-17 System for intelligently eliminating osteophytes of lower limb bone images by utilizing generated confrontation network model

Country Status (1)

Country Link
CN (1) CN109984841B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10692607B2 (en) * 2015-08-18 2020-06-23 Case Western Reserve University Treatment planning and evaluation for rectal cancer via image analytics

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004073297A1 (en) * 2003-02-12 2004-08-26 Paul Huang Dual image sensor photograph technique and device
JP2010004090A (en) * 2008-06-18 2010-01-07 Ricoh Co Ltd Imaging apparatus
CN102269713A (en) * 2011-08-02 2011-12-07 武汉科技大学 Surface image acquiring device of continuous casting mold copper plate
CN106434330A (en) * 2016-10-09 2017-02-22 戴敬 Absolute quantification type digital nucleic acid analytic system based on efficient liquid drop microreactor
CN106841061A (en) * 2016-12-28 2017-06-13 济南格利特科技有限公司 A kind of high-resolution blood cell analyzer and analysis method
CN107144242A (en) * 2017-06-01 2017-09-08 昆山科森科技股份有限公司 Fingerprint module appearance delection device based on ccd image sensor
CN108229576A (en) * 2018-01-23 2018-06-29 北京航空航天大学 Across the multiplying power pathological image feature learning method of one kind
CN108198154A (en) * 2018-03-19 2018-06-22 中山大学 Image de-noising method, device, equipment and storage medium
CN109191414A (en) * 2018-08-21 2019-01-11 北京旷视科技有限公司 A kind of image processing method, device, electronic equipment and storage medium
CN109493308A (en) * 2018-11-14 2019-03-19 吉林大学 The medical image synthesis and classification method for generating confrontation network are differentiated based on condition more
CN109637634A (en) * 2018-12-11 2019-04-16 厦门大学 A kind of medical image synthetic method based on generation confrontation network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Suspicious region labeling of breast cancer pathological images based on generative adversarial networks; 刘海东, 杨小渝, 朱林忠; 《科研信息化技术与应用》; 2017-06-30; Vol. 8, No. 6; pp. 52-64 *

Also Published As

Publication number Publication date
CN109984841A (en) 2019-07-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant