CN115015888A - Laser three-dimensional dynamic scene simulation system - Google Patents

Laser three-dimensional dynamic scene simulation system

Info

Publication number
CN115015888A
CN115015888A (application CN202210535240.6A)
Authority
CN
China
Prior art keywords
laser
image
simulation
module
dimensional
Prior art date
Legal status
Granted
Application number
CN202210535240.6A
Other languages
Chinese (zh)
Other versions
CN115015888B (en)
Inventor
邵冬亮
Current Assignee
Harbin Fangju Technology Development Co ltd
Original Assignee
Harbin Fangju Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Harbin Fangju Technology Development Co ltd
Priority to CN202210535240.6A
Publication of CN115015888A
Application granted
Publication of CN115015888B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41WEAPONS
    • F41GWEAPON SIGHTS; AIMING
    • F41G3/00Aiming or laying means
    • F41G3/32Devices for testing or checking
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A laser three-dimensional dynamic scene simulation system solves the problem that a real scene and a real target must be set up when the laser radar of a seeker is tested, and belongs to the technical field of laser radar detection. The invention comprises: a modeling system, which models a three-dimensional scene according to the detection target and background of the laser radar under test, runs a simulation to obtain a laser-reflectivity simulation image of the three-dimensional scene target, slices the image according to a set detection distance resolution gate, combines the sliced images with the corresponding detection distance information by storing them in the RGB channels of an image in time order, and sends the combined image to the laser echo generation system; and a laser echo generation system, which decodes the received image data into a time-sequential image, extracts the graphic information and the detection distance information according to the decoding timing, generates, under the control of a trigger signal from the laser radar seeker under test, a laser echo that corresponds to the graphic information and is consistent with the detection distance information, and sends the laser echo to the laser radar under test.

Description

Laser three-dimensional dynamic scene simulation system
Technical Field
The invention relates to a laser three-dimensional dynamic scene simulation system, and belongs to the technical field of laser radar detection.
Background
Laser radar (lidar) is short for "laser detection and ranging" system. Lidar combines laser technology with radar technology into a three-dimensional laser scanning system. Its working principle is to continuously transmit detection signals (laser beams) toward surrounding targets and receive the returned signals (target echoes); from these, information about the measured object, such as target distance, azimuth, height, attitude and shape, is calculated and described, achieving dynamic 3D scanning. In the prior art, when the laser radar of a guided weapon is tested, a target is placed in a scene, for example a tank or a vehicle in a desert, and the tank in the desert is detected by the laser radar. When an existing laser radar is tested, different scenes and targets must be provided for different technical-index tests so that the detection performance of the laser radar can be judged properly.
Disclosure of Invention
The invention provides a laser three-dimensional dynamic scene simulation system, which aims at the problem that a real scene and a real target need to be set up when the existing laser radar of a guided weapon is tested.
The invention discloses a laser three-dimensional dynamic scene simulation system, which comprises a modeling system and a laser echo generating system;
the modeling system is used for modeling a three-dimensional scene and simulating the laser reflectivity of the three-dimensional scene according to a detected laser radar detection target and a background to obtain a three-dimensional scene target laser reflectivity simulation image, slicing the obtained three-dimensional scene target laser reflectivity simulation image according to a set detection distance resolution gate, combining the sliced image with corresponding detection distance information, storing the sliced image in an RGB (red, green and blue) channel of the image according to a time sequence, and sending the combined image to the laser echo generation system;
and the laser echo generating system is used for decoding the received image data to obtain a time sequence image, respectively extracting graphic information and detection distance information according to the decoding time sequence, generating a laser echo corresponding to the graphic information according to the detection distance information under the control of a trigger signal of the detected laser radar seeker, and sending the laser echo to the detected laser radar.
Preferably, the modeling system comprises a three-dimensional modeling module, a control interaction module, a simulation module and a data generation module;
the control interaction module is used for inputting scene simulation parameters and laser simulation parameters; the scene simulation parameters comprise target attitude, position and target speed;
the system comprises a three-dimensional modeling module, a transmission medium module and a data processing module, wherein the three-dimensional modeling module is used for establishing a three-dimensional geometric model, a target surface characteristic model and a transmission medium model of a target and a background according to a detected laser radar detection target and the background, and the target surface characteristic model is used for simulating the laser reflectivity of the target under the control of laser simulation parameters according to the surface texture and the material of the target under the condition of natural illumination; the transmission medium model is used for determining the attenuation degree of laser according to the environmental background radiation and the atmospheric environment change and simulating the laser reflectivity of the background;
the simulation module is used for carrying out three-dimensional scene simulation according to the scene simulation parameters and the established model to obtain a three-dimensional scene simulation image;
the data generation module is used for determining the laser reflectivity of the target and the background according to the laser simulation parameters by combining the established surface characteristic model and the established transmission medium model, dividing the laser reflectivity into gray levels, performing gray level simulation on the three-dimensional scene target and acquiring a three-dimensional scene reflectivity simulation image; the method comprises the steps of binarizing a three-dimensional scene reflectivity simulation image, performing two-dimensional slicing on the binarized image according to a detection distance gate emitting laser, grouping the sliced images, storing the sliced images in an RGB channel of the image according to time sequence, storing detection distance information of the sliced images in other channels of the RGB channel according to time sequence to complete image reconstruction, taking the storage time sequence of the sliced images and the detection distance information as a reconstruction protocol, and sending reconstructed image data to a laser echo generation system.
Preferably, in the data generation module, dividing the reflectivity into gray levels, performing gray-level simulation on the three-dimensional scene and obtaining the three-dimensional scene reflectivity simulation image comprises:
dividing the gray scale into 65 levels, where level 0 corresponds to no target and no background, the level rises with increasing reflectivity, and level 64 corresponds to the highest reflectivity;
if the gray level of a pixel is a (a = 0 to 64), expanding the single pixel into an 8 × 8 block in which a sub-pixels are arranged rotationally outward from the center, which completes the reflectivity simulation of that single pixel.
Preferably, the laser echo generating system comprises a graph distance separating module, a synchronous delay module, a DMD driving system, a laser and a collimating optical system;
the image distance separation module is used for receiving the reconstructed image data, decoding the received image data to obtain a time sequence image of the image data, respectively extracting image information and detection distance information from a data stream according to a storage time sequence in a reconstruction protocol, sending the image information to the digital DMD driving system, and sending the detection distance information to the synchronous delay module;
the synchronous delay module is used for receiving a trigger signal of the detected laser radar seeker, calculating the switch delay of the digital DMD driving system and the pulse delay of the laser according to the detection distance information of the trigger signal and the graph distance separation module, and respectively sending the switch delay of the digital DMD driving system and the pulse delay of the laser to the digital DMD driving system and the laser;
the laser light emitted by the laser is incident on the micromirror DMD array of the DMD driving system;
the micromirror DMD array forms an image according to the received graphic information and the laser light incident from the laser, and directs the laser toward the collimating optical system according to that image; the collimating optical system emits the laser as a laser echo signal to the laser radar under test.
Preferably, the reconstructed image data are transmitted over the DVI transmission physical protocol, and the image-distance separation module comprises a hardware decoding module and a software decoding module; hardware decoding is performed by a decoding chip of model THC63DV16 and decodes the RGB data and the corresponding synchronization signals necessary for video transmission;
software decoding is implemented on a high-speed FPGA used as the main control chip, written in the VHDL hardware description language, and separates the graphic information and the detection distance information of the data stream according to the storage timing in the reconstruction protocol.
Preferably, the acquiring of the scene simulation parameters includes self-setting or real-time acquiring through a network communication mode.
Preferably, the system further comprises a control information monitoring module;
and the control information monitoring module is used for monitoring and displaying the interactive control log, the scene simulation log and the image processing log in the simulation process.
The advantage of the invention is that it can generate a three-dimensional dynamic laser target and scene, output it as collimated (parallel) light through the optical system for reception by the product under test, and thereby test the function and performance of a laser radar product. The invention can simulate the target-distance echo signals used by a laser three-dimensional imaging laser radar for detection; it can generate corresponding target and scene echo signals on demand; and it can provide a variety of targets and backgrounds, so that no actual tank, vehicle or real background is required and various index tests can be carried out in a laboratory.
Drawings
FIG. 1 is a schematic diagram of the principles of the present invention;
FIG. 2 is a three-dimensional geometric model in an embodiment of the invention;
FIG. 3 is a schematic diagram of the effect of a three-dimensional scene simulation image according to the present invention;
FIG. 4 is a schematic diagram of the effect of a three-dimensional scene reflectivity simulation image according to the present invention;
FIG. 5 illustrates the gray level binarization of the pixels in accordance with the present invention;
FIG. 6 is a timing diagram of the range strobe of the present invention;
FIG. 7 is a sequence of slice imaging simulation images at a view distance of 300 meters in accordance with the present invention;
FIG. 8 is a schematic diagram illustrating the image reconstruction principles of the present invention;
FIG. 9 is a schematic diagram of the electrical system of the present invention;
FIG. 10 is a hard decoding timing diagram;
fig. 11 is a diagram of a soft decoding principle.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive efforts based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The laser three-dimensional dynamic scene simulation system comprises a modeling system and a laser echo generation system;
the modeling system is used for modeling a three-dimensional scene and simulating the laser reflectivity of the three-dimensional scene according to a detected laser radar detection target and a background to obtain a three-dimensional scene target laser reflectivity simulation image, slicing the obtained three-dimensional scene target laser reflectivity simulation image according to a set detection distance resolution gate, combining the sliced image with corresponding detection distance information, storing the sliced image in an RGB (red, green and blue) channel of the image according to a time sequence, and sending the combined sliced image to the laser echo generation system;
and the laser echo generating system is used for decoding the received image data to obtain a time sequence image, separating image information from detection distance information according to a decoding time sequence, generating a laser echo corresponding to the image information according to the detection distance information under the control of a trigger signal of the detected laser radar seeker, and sending the laser echo to the detected laser radar.
The modeling system comprises two parts: one part models the three-dimensional scene of the detection target and background of the laser radar under test, and the other part simulates the laser reflectivity of the three-dimensional scene. After simulation, image information and detection distance information are generated and sent to the laser echo generation system, which converts them into laser echoes and sends the echoes to the laser radar, so that the laser radar is tested.
In a preferred embodiment, the modeling system of the invention comprises a three-dimensional modeling module, a control interaction module, a simulation module and a data generation module;
the control interaction module is used for inputting scene simulation parameters and laser simulation parameters;
the scene simulation parameters comprise target attitude, position and target speed;
the system comprises a three-dimensional modeling module, a transmission medium module and a data processing module, wherein the three-dimensional modeling module is used for establishing a three-dimensional geometric model, a target surface characteristic model and a transmission medium model of a target and a background according to a detected laser radar detection target and the background, and the target surface characteristic model is used for simulating the laser reflectivity of the target under the control of laser simulation parameters according to the surface texture and the material of the target under the condition of natural illumination; the transmission medium model is used for determining the attenuation degree of laser according to the environmental background radiation and the atmospheric environment change, and simulating the laser reflectivity of the background;
the three-dimensional modeling module of the present embodiment mainly includes three parts, i.e., geometric modeling of an object and a background, surface property modeling, and transmission medium modeling. Wherein, the geometric modeling mainly aims at a typical target and a background, and a three-dimensional modeling software (such as 3DMax and the like) is used for generating a dynamic three-dimensional scene geometric model; the surface characteristic modeling is mainly used for establishing a model according to textures and materials of the actual surface of the target, so that the target can show the reflectivity consistent with the real situation according to the control and the change of natural illumination conditions and laser emission condition parameters; the transmission medium modeling is mainly used for establishing a model according to the environmental background radiation and the atmospheric environment so as to reflect the attenuation degree of the laser and simulate the laser reflectivity of the background.
The simulation module is used for carrying out three-dimensional scene simulation according to the scene simulation parameters and the established models to obtain a color three-dimensional scene simulation image;
in the embodiment, the three-dimensional visual simulation of the original three-dimensional scene, the target and the background laser detection is performed according to the established model, which specifically comprises the following steps: and loading geometric models and texture materials of the target and the background, and rendering a vivid visual scene containing atmosphere, illumination and three-dimensional models based on a three-dimensional graphic engine according to the established transmission medium model.
The data generation module is used for determining the laser reflectivity of the target and the background according to the laser simulation parameters, in combination with the established surface characteristic model and transmission medium model, dividing the laser reflectivity into gray levels, performing gray-level simulation on the three-dimensional scene target and obtaining a three-dimensional scene reflectivity simulation image. It binarizes the three-dimensional scene reflectivity simulation image, performs two-dimensional slicing on the binarized image according to the detection range gate of the emitted laser, groups the sliced images and stores them in the RGB channels of an image in time order, stores the detection distance information of the sliced images in the remaining RGB channels in time order to complete the image reconstruction, takes the storage timing of the sliced images and the detection distance information as the reconstruction protocol, and sends the reconstructed image data to the laser echo generation system. In this embodiment, for the laser simulation parameters set by the control interaction module, the target and background reflectivity under the set conditions is calculated from the established target surface characteristic model and transmission medium model, and the gray-binarized slice images are simulated. The scene slice data are combined with the detection distance data, the reconstructed image is sent to the laser echo generation system, and the data information required by the echo generator is produced in real time. After the three-dimensional simulation scene is generated, the simulated image is discretely sliced in space, in a two-dimensional slicing manner, according to the detection range gate. For laser imaging, different points in space correspond to different echo times; the two-dimensional slices serve as echo image position indicators and provide echo profile information for the downstream echo generation equipment. Slicing is applied only to the visible surface of the scene, that is, no slicing is performed on interior surfaces or on hidden surfaces of the target or background, which ensures that occluded parts of the scene cannot produce an echo signal. After two-dimensional slicing, the images are single-channel gray echo images at different positions in space; to improve transmission efficiency, the slice images are grouped and stored in the RGB channels of an image in time order, using the RGB principle of computer images, which completes the image reconstruction.
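As an illustration only, the following sketch shows one way the visible-surface slicing described above could be expressed in code, assuming the simulated scene is available as a per-pixel reflectivity image plus a per-pixel range map of the visible surface; the array names, gate values and the use of Python/NumPy are assumptions of this sketch rather than the patent's implementation.

```python
import numpy as np

def slice_scene(reflectivity, range_map, gate_start_m, gate_width_m, n_gates):
    """Cut the visible-surface range map into binary two-dimensional slices.

    reflectivity : (H, W) float array, simulated laser reflectivity image
    range_map    : (H, W) float array, distance of the visible surface per pixel (m)
    Returns a list of (binary slice, slice distance) pairs, one per range gate.
    """
    slices = []
    for i in range(n_gates):
        near = gate_start_m + i * gate_width_m
        far = near + gate_width_m
        # only pixels whose visible surface falls inside this gate produce an echo
        mask = (range_map >= near) & (range_map < far) & (reflectivity > 0)
        slices.append((mask.astype(np.uint8), near))
    return slices

# Example: a 128 x 96 scene, gates every 3 m starting at 290 m
refl = np.random.rand(96, 128)
depth = 290.0 + 30.0 * np.random.rand(96, 128)
for img, dist in slice_scene(refl, depth, 290.0, 3.0, 10):
    print(dist, int(img.sum()))
```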
The image output of this embodiment consists of displayed simulation results and output image data. The displayed simulation results include the original color three-dimensional scene simulation result, the reflectivity modeling simulation result, the reconstructed image result, and so on; the output image data part sends the reconstructed image to the echo generation equipment through a DVI interface according to the selected simulation mode.
In a preferred embodiment, the obtaining of the scene simulation parameters in the present embodiment includes self-setting or real-time obtaining through a network communication mode.
After the model is established, a simulation mode is selected through an interactive interface, a static simulation mode is realized by setting interactive control parameters, and a dynamic simulation mode is realized by reading network data in real time; and performing static or dynamic three-dimensional scene simulation according to the selected simulation mode.
The simulation function of this embodiment mainly comprises a static simulation part and a dynamic simulation part, distinguished by how the scene simulation parameters are acquired. Scene simulation parameters can be acquired through interactive-interface settings or through network communication. With the interactive-interface setting mode, a single-frame image of a given state is generated, which is static simulation; with the network communication mode (for example, receiving information from a fiber-optic reflective memory card), real-time information such as target and background attitudes, missile-target view angles and missile-target distances is acquired, and the simulation scene and a serialized image sequence are generated dynamically, which is dynamic simulation. The static simulation mode outputs a single-frame image, while the dynamic simulation mode outputs a sequence of images.
In a preferred embodiment, dividing the reflectivity into gray levels and performing gray simulation on a three-dimensional scene to obtain a three-dimensional scene reflectivity simulation image includes:
dividing the gray scale into 65 levels, where level 0 corresponds to no target and no background, the level rises with increasing reflectivity, and level 64 corresponds to the highest reflectivity; if the gray level of a pixel is a (a = 0 to 64), the single pixel is expanded into an 8 × 8 block in which a sub-pixels are arranged rotationally outward from the center, completing the reflectivity simulation of that single pixel.
The specific embodiment is as follows:
one-dimensional modeling module for modeling target and background
The present embodiment is directed to the following scenarios: the target is a tank and the background is a desert;
and carrying out geometric modeling on the tank target and the desert background. And 3D Max three-dimensional modeling software is used for establishing a three-dimensional triangular mesh model of the target and the background, and simulating the simulation target in a real scene. The method comprises the following steps of (1) performing geometric simulation modeling on a target, establishing a three-dimensional mesh simulation model of the target in a virtual three-dimensional space by means of three-dimensional modeling software, describing the geometric outline and the spatial position of the target by adopting combination attributes such as vertexes, edges, patches and the like, and simulating an imaging target in a real scene: the vertexes are connected pairwise to form edges, the edges connected end to end form patches, all the patches are combined into a three-dimensional mesh model of the target, and the vertexes and the patches are the two most important attributes in the geometric model of the target. An effect diagram of the tank after geometric modeling by adopting three-dimensional modeling software is shown in FIG. 2;
the general target surface is composed of a plane surface and a curved surface, and the present embodiment performs simplified analysis with a cylindrical surface as a typical curved surface. For the side surface of the cylinder, the radius of the cylinder is set as r C The height of the crystal is H,
Figure BDA0003647636320000061
is the normal vector outside the cylindrical surface,
Figure BDA0003647636320000062
representing the incident direction vector. Taking the central point of the bottom surface of the cylinder as the origin of coordinates, the equation of the cylinder surface is as follows:
Figure BDA0003647636320000071
in the formula:
Figure BDA0003647636320000078
feature vector F of visible target surface element attribute x,y Describing, calculating the bidirectional reflection distribution function value of the target surface element after defining the target material(Bi-directional reflection Distribution Function, BRDF), for imaging simulation, the distance of a target surface element corresponding to an observation plane pixel and the radiance of the target surface element need to be known, and the target surface element eigenvector can be calculated.
This embodiment uses the SUN model to analyze the laser scattering characteristics of the space target. The model expresses the bidirectional reflectance distribution function as a function of the incidence and observation geometry, the surface roughness statistics and the Fresnel reflection coefficient, where: θ_i is the incident zenith angle; θ_r is the observation zenith angle; φ_i is the incident azimuth angle; φ_r is the observation azimuth angle; σ is the root-mean-square surface roughness; and τ is the surface autocorrelation length. σ and τ together characterize the smoothness of the target surface: the smaller σ and the larger τ, the smoother the target, and for an ideally smooth surface s → ∞. F(θ_i, λ) is the Fresnel reflection coefficient, a function of the incidence angle θ_i; it varies with the incidence angle, is determined by the properties of the surface material, and is expressed through the refractive indices of the incident medium and the transmission medium.
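The Fresnel coefficient mentioned above depends only on the incidence angle and the refractive indices of the two media. The sketch below computes the standard unpolarized Fresnel reflectance from Snell's law; it is shown as a generic ingredient of such a surface-scattering model, with assumed refractive-index values, not as the exact expression used in the patent.

```python
import numpy as np

def fresnel_unpolarized(theta_i_rad, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance at incidence angle theta_i (medium n1 -> n2)."""
    sin_t = n1 / n2 * np.sin(theta_i_rad)       # Snell's law
    if sin_t >= 1.0:
        return 1.0                              # total internal reflection
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i_rad) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i_rad) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i_rad)) / \
         (n1 * np.cos(theta_t) + n2 * np.cos(theta_i_rad))
    return 0.5 * (rs ** 2 + rp ** 2)

print(fresnel_unpolarized(np.deg2rad(30.0)))    # reflectance at 30 degrees incidence
```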
Second, the simulation module obtains the three-dimensional scene simulation image: the geometric models and texture materials of the target and background are loaded, and a realistic visual scene containing atmosphere, illumination and the three-dimensional models is rendered by a three-dimensional graphics engine according to the established transmission medium model. The three-dimensional scene visualization simulation result is displayed in the three-dimensional scene simulation display area of the main screen; the true resolution is 128 × 96, and the picture is suitably enlarged to display the simulation effect intuitively. The rendering effect is shown in FIG. 3;
thirdly, the data generation module acquires a three-dimensional scene reflectivity simulation image:
according to the embodiment, a related algorithm is designed according to the established surface characteristics and the transmission medium model aiming at the laser simulation parameters set by a user, the target and background reflectivity under the set condition is calculated, and finally, a slice image subjected to gray level binarization is simulated in a slice image display area of a main screen. The image display resolution is 1024 × 768, and the display effect is shown in fig. 4;
when a three-dimensional simulation scene is constructed, pixel binarization is carried out on a laser detection simulation three-dimensional gray level image output by the scene, a single pixel gray level value is converted into 8 multiplied by 8 pixel points, and the resolution of the image subjected to gray level binarization is converted from 128 multiplied by 96 to 1024 multiplied by 768. Before binarization, the gray scale is divided into 65 levels in total, the gray scale is 0 under the condition of no target and no background, and the gray scale is 64 when the reflectivity is highest. Assuming that the gray scale of one pixel is 15, the single pixel is enlarged to 8 × 8 pixels, and 15 pixels are arranged by rotation from the center, thereby realizing the gray scale binarization of the pixel, as shown in fig. 5;
image slicing refers to generating a two-dimensional image of a point tangent plane of a target or background visible surface in a three-dimensional simulation scene. According to the laser imaging principle, echo profile information is provided for a laser echo generating system through extracting slices by setting a detection range gate and a detection range resolution parameter. The image slice mainly simulates a range-gated slice, wherein the range-gated technology utilizes a pulse laser and a gating imager to enable the gating time of the gating imager to be consistent with the arrival time of an echo pulse of the laser, so that a target at a specific distance is imaged. Pulsed laser and gated imagingThe devices are synchronized by a control circuit at t 0 At that time, the laser pulses, and the gated imager is turned off. At the location (t) where the laser pulse travels to the target and reflects back to the gated imager 1 Time) before the gated imager remains off, so that t 0 And t 1 The backward scattered light generated in the transmission process of the pulse laser can not enter the gating imager to form noise. When the reflected pulse laser reaches the gated imager (t) 1 Time of day), the gated imager is turned on, receives the returned pulsed laser and performs imaging. The target image thus formed is primarily related to the reflected light of the target during the range gate time.
The acquisition process of a laser two-dimensional intensity image is as follows: the laser emits a light pulse, which is reflected back after illuminating the target; only the echo at one specific moment can be received in this process, and at all other moments the MCP gate is closed and no image is formed. The next echo pulse signal is imaged with an additional time delay, and repeating these steps yields a series of two-dimensional slice images. The specific implementation process is shown in FIG. 6.
In FIG. 6, the first rising edge of the synchronous trigger signal triggers the laser; after t0, this pulse, delayed by t0 + t, is applied to the MCP gate, so that the laser radar system detects the laser echo at detection distance R1 = ct/2 and the intensity image of the first slice is obtained, with corresponding detection distance R1. When the next clock pulse arrives, the laser trigger pulse again triggers the laser; this time the pulse signal is applied to the MCP gate after a delay of t + Δt, and the laser radar system images the target at detection distance R2 = c(t + Δt)/2. This process is then repeated, yielding a series of two-dimensional images, and continues until the set detection distance Rn is reached. The number of slice images obtained is:

n = 2(Rn − R1)/(cΔt) + 1

The resulting two-dimensional intensity images are used to synthesize a 3D image. The obtained two-dimensional intensity slice images are first arranged and numbered in the order in which they were obtained; the detection distance corresponding to the i-th two-dimensional image is:

R_i = R1 + (i − 1)·cΔt/2 = c[t + (i − 1)Δt]/2
in the embodiment, the visual distance is set to 300 meters, the detection distance gate is set to 3 meters, and the generated slice imaging simulation image sequence is shown in fig. 7;
and reorganizing and generating the single gray image sequence and the detection distance information after the two-dimensional slicing through image reconstruction, and transmitting the single gray image sequence and the detection distance information to an echo generating system so as to improve the transmission efficiency. By combining the computer image RGB principle, a new group of images is formed by 10 groups of slices, and the 10 groups of images are respectively placed on G1-G0 channels and R7-R0 channels of the images according to the sequence, namely the channels { G1/G0/R7/R6/R5/R4/R3/R2/R1/R0 }. The slice detection distance information corresponding to the simulation T0 moment is superposed on G7-G2 and B7-B0 channels, namely the channels { G7/G6/G5/G4/G3/G2/B7/B6/B5/B4/B3/B2/B1/B0} to complete image reconstruction, as shown in FIG. 8;
fourthly, controlling the interaction module to perform network communication interaction and simulation control:
when the user selects the dynamic simulation mode, the embodiment receives real-time simulation data such as the position, the posture, the visual angle, the speed and the like of the shot eyes transmitted in the optical fiber reflection memory card, simulates the approach process of the shot eyes, controls the modeling target and the scene posture in real time, and simultaneously generates a three-dimensional dynamic sequence image for generating data required by a laser echo system.
In this embodiment, the simulation process is controlled through laser parameter settings and scene control settings. The laser parameters include the laser emission angle, the detection distance gate, the detection distance resolution, and so on; the scene control parameters include the target position and attitude, the missile position, attitude, speed and field-of-view angle, and so on. In the dynamic simulation mode the scene control parameters are acquired through network communication, while in the static simulation mode they are set by the user.
The control information monitoring mainly monitors and displays an interactive control log, a scene simulation log and an image processing log in the simulation process, and is convenient for a user to monitor and manage the simulation process of the modeling software.
The laser echo generation system of this embodiment comprises an image-distance separation module, a synchronous delay module, a DMD driving system, a laser and a collimating optical system;
the image distance separation module is used for receiving the reconstructed image data, decoding the received image data to obtain a time sequence image of the image data, respectively extracting image information and detection distance information from a data stream according to a storage time sequence in a reconstruction protocol, sending the image information to the digital DMD driving system, and sending the detection distance information to the synchronous delay module;
the synchronous delay module is used for receiving a trigger signal of the tested laser radar seeker, calculating the switch delay of the digital DMD driving system and the pulse delay of the laser according to the detection distance information of the trigger signal and the graph distance separation module, and respectively sending the switch delay of the digital DMD driving system and the pulse delay of the laser to the digital DMD driving system and the laser;
the laser light emitted by the laser is incident on the micromirror DMD array of the DMD driving system;
the micromirror DMD array forms an image according to the received graphic information and the laser light incident from the laser, and directs the laser toward the collimating optical system according to that image; the collimating optical system emits the laser as a laser echo signal to the laser radar under test.
The modeling system of the embodiment models the laser echo signal by using modeling software, modeling information is sent to the laser echo generating system through a special transmission protocol, the laser echo generating system generates laser echoes of a target and a background with a specific distance under the control of a synchronous signal of a tested product, and the laser echoes are sent to the entrance pupil of the tested product through the collimating optical system. The schematic diagram is shown in FIG. 1;
the embodiment also comprises an electrical system, and the main function is to provide power supply and other corresponding control functions for the laser echo generating system. As shown in fig. 9, the electrical system is supported by a vertical electric control cabinet, and the graphic workstation and the display are integrally installed inside the electric control cabinet. The electric control cabinet is internally integrated with components such as a switch, an indicator light, a noise filter, a relay, a circuit breaker, a direct-current power supply and the like; the electric control cabinet is connected with the laser echo generating system host through a special communication and power supply cable. Because the whole system works at high frequency, the related image processing part is completed by adopting a special circuit board, and in order to shorten the transmission distance, components such as a distance separating circuit board, a synchronous delay circuit board, a DMD driving circuit board, a laser and the like are integrated in the host of the laser echo generating system.
The image distance separation module firstly completes DVI hardware protocol decoding, and after hardware decoding, software decoding is carried out through a special transmission protocol.
DVI transmits digital signals based on TMDS technology. TMDS applies an encoding algorithm that converts each 8-bit data word (each primary color signal of R, G, B) into transition-minimized 10-bit data (carrying line/field synchronization information, clock information, data enable (DE), error correction, etc.) and, after DC balancing, transmits it as differential signals, which gives better electromagnetic compatibility than LVDS or TTL. The DVI transmission physical protocol uses 3 differential data pairs and 1 differential clock pair, with a maximum clock frequency of 145 MHz, so the data throughput meets the design requirements. Therefore, this embodiment adopts the DVI hardware protocol as the physical medium for transmitting the image and detection distance information. The main function of the protocol decoding module is to decode the special data information transmitted by the graphics workstation over the DVI hardware protocol. Protocol decoding uses a dedicated decoding chip, the THC63DV161.
Under the control of its peripheral circuit, the decoding chip accurately decodes the high-speed DVI signal into the image chrominance data and the corresponding synchronization signals necessary for video transmission.
After decoding, frame synchronization, line synchronization and RGB data are produced. The data information is hardware-decoded according to the modeling software's image reconstruction protocol; the decoded data sequence is shown in FIG. 10;
after hardware decoding, according to decoding time sequence, the data stream is subjected to graphic and detection distance software separation decoding. The software decoding is to decode the data of the modeling software, decode the geometric figure data and the detection distance data, and send the geometric figure data to the DMD driving module and the synchronous delay system respectively. The system adopts a high-speed FPGA as a main control chip, the model of the chip is xilinx V5 series, the internal clock frequency of the FPGA can reach 1GHz, and sufficient logic units and peripheral pins are arranged. 4 PLL clock rings are provided, which can satisfy high-speed graphics processing. And completing the soft decoding of the graph and the detection distance by adopting a VHDL hardware description language. The graph is a single gray image, a new group of images is formed by 10 groups of slices, and the 10 groups of images are respectively placed on { G1/G0/R7/R6/R5/R4/R3/R2/R1/R0} according to the sequence. Slice detection distance information corresponding to the simulation T0 moment is superposed on { B7/B6/B5/B4/B3/B2/B1/B0/G7/G6/G5/G4/G3/G2}, so that during decoding, G7/G6/G5/G4/G3/G2/B7/B6/B5/B4/B3/B2/B1/B0 are collected, a group of detection distance parameters are generated and stored in an L0 register, wherein L0 is 14bit data and corresponds to scene information at the T0 moment; respectively collecting G1/G0/R7/R6/R5/R4/R3/R2/R1/R0 data of one frame, generating 10 groups of binary graphic profile parameters, and respectively storing the parameters in P0-P9 registers; software separation of the pattern from the probe distance is accomplished. Based on the parallelism working principle of the FPGA, the graph distance separation can complete the separation operation in 2 system clocks, so that the original data transmission frame frequency is not changed while the minimum system delay is ensured. The soft decoding schematic is shown in fig. 11;
the DMD driving system comprises a micromirror DMD array, a DMD driving module, a DDR2 internal memory, a power supply component and the like. DMD theory of operation: digital Micro-mirror arrays (DMD) are silicon-based Micro-electro Mechanical systems (MEMS) that are reflective light spatial modulators developed by texas instruments, usa. It is composed of thousands of aluminum alloy micro-reflectors. The high reflectivity micro-mechanical turning mirror is mounted on a standard COMS memory chip. The micromirror pixel unit consists of hinge support, thin torsion hinge, thick mirror element, address electrode and lapping electrode. Each micromirror in the array can be independently controlled to direct reflected light into or out of the collection aperture of the lens. The micromirror can rotate + -12 deg. around the torsion arm. The imaging of the digital micromirror is completed by the rotation of the micromirrors, each micromirror can rotate, the positions of the micromirrors are different, and the emergent angles of reflected light are different, so that each micromirror is equivalent to an optical switch. The micro mirror element is horizontally arranged, the projection lens is arranged on the normal line of the micro mirror element, if the incident light is incident at an incident angle of 24 degrees, the included angle between the reflected light and the normal line of the micro mirror is 24 degrees, at the moment, the reflected light can not enter the pupil of the projection lens, and only a small amount of light penetrates through the projection lens to reach the projection screen, which is generally called as a 'flat state'; under the condition that the incident direction of incident light and the position of the projection lens are not changed, when the micro-mirror element rotates 12 degrees (namely +12 degrees) clockwise, the included angle between emergent light and the incident light is 24 degrees, the emergent light is exactly in the same direction with the optical axis of the projection lens, almost all of the emergent light passes through the projection lens and is projected onto a projection screen, and a bright state is formed, namely an 'on' state; when the micromirror element rotates 12 degrees (i.e., -12 degrees) counterclockwise from the horizontal position, the incident light direction and the position of the projection lens are unchanged, the angle between the emergent light and the incident light is 48 degrees, and the emergent light is far away from the pupil of the lens at this time, so that a dark state appears on the projection screen, which is called an "off" state. The micro-mirror element has a rotation angle of +/-12 degrees, and can effectively control the on state and the off state of incident light so as to ensure higher contrast. Because the torsion arm beam is very thin, the micro reflector has very light weight and very small moment of inertia, the response time is very fast and is about 5 mu s from a complete 'on' state to a complete 'off' state, and the frame frequency can be very high by using the micro mirror as a dynamic scene generator. Digital micromirrors have now demonstrated many advantages in the field of visible light projection. In the aspect of weapon system operation, it is common to replace the windows of the digital micromirrors to make them able to transmit the light wave of the desired wavelength band, and to maintain the operability of all micromirrors based on this. 
Digital micromirror technology can be used to produce visible, ultraviolet, infrared and laser imaging targets. DMD selection: the micromirror array uses a dedicated DMD chip produced by TI (USA), chip model 1076-. By replacing its window, the DMD chip can meet the requirement of transmitting the laser waveband; the window uses a special packaging process to guarantee the sealing effect and the product lifetime. DMD driving: the DMD driving module consists of two parts. The first is the hardware system, which mainly comprises the main control FPGA, the DMD reset-drive control chip DAD2000 and DDR2 memory. The FPGA is a Xilinx V5-series high-speed device with multiple clock inputs, multiple PLLs and a large on-chip logic capacity, and a maximum clock frequency of 1 GHz; the DMD reset-drive control chip is TI's dedicated DAD2000, which provides the high-speed differential working clock and data for the DMD micromirrors. After the digital image passes through the image-distance separation module, the graphic information is sent in a fixed format to the dedicated DMD-drive FPGA, collected and stored in the DDR2 memory; according to the external synchronous trigger signal, the image data and the DMD control timing signals generated inside the FPGA are transmitted to the DAD2000, and the FPGA and the DAD2000 together complete the low-level DMD drive. The second part is the software system, designed in the VHDL hardware description language to guarantee the accuracy and stability of the control timing. It first receives the graphic information from the image-distance separation module and stores the video signal in the DDR2 memory, then generates memory read commands and DMD control commands according to the external synchronous trigger; while the image data are read out, they are coordinated with the DMD reset and flip commands so that the image information is displayed on the DMD. The first-generation DMD drive system uses an integrated design, combining video decoding, image processing, DMD driving and so on on a single circuit board, which reduces the influence of discrete devices on the overall stability of the product and gives a high level of integration and good stability. The second-generation DMD drive system uses a miniaturized design with an even higher level of integration, reduces cost in volume production, and integrates all the image, control, space and volume requirements of the dynamic-target laser echo generation system, so that the weight and volume of the dynamic laser echo generation system are minimized and the stability is better.
The synchronous delay module is mainly used to receive the trigger signal of the seeker, calculate the DMD switching delay and the laser pulse delay from the trigger signal and the modeled detection distance information supplied by the image-distance separation module, and send them to the DMD drive circuit and the laser respectively, controlling the DMD flipping and the laser emission. The synchronous delay control module uses a Xilinx V7-series FPGA as its main control chip; the maximum clock frequency of the FPGA can reach 1000 MHz, and stable delays with a precision of 1 ns can be achieved. The module uses the Verilog HDL language for low-level gate operations, which improves the overall real-time performance of the system.
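The delay values themselves follow from the round-trip time of light for the simulated distance; the sketch below shows that arithmetic in Python purely for illustration (in the actual module this runs as FPGA gate-level logic), with the fixed hardware offsets as assumed parameters.

```python
C = 3.0e8  # speed of light, m/s

def echo_delays(detection_distance_m, dmd_settle_s=0.0, laser_fixed_offset_s=0.0):
    """Delays to apply after the seeker trigger so the echo appears at the simulated range.

    The round-trip time for the simulated distance is 2R/c; the DMD switching delay and
    the laser pulse delay are offset from it by (assumed) fixed hardware latencies.
    """
    round_trip_s = 2.0 * detection_distance_m / C
    dmd_delay_s = round_trip_s - dmd_settle_s        # flip the mirrors before the pulse fires
    laser_delay_s = round_trip_s - laser_fixed_offset_s
    return dmd_delay_s, laser_delay_s

# Example: a slice at 300 m corresponds to a 2 microsecond round trip
print(echo_delays(300.0))   # -> (2e-06, 2e-06)
```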
The laser system adopts an LD pumping air-cooled laser, and has two working modes, namely internal control and external control. When the laser works in the internal control mode, laser pulses with fixed frequency and fixed pulse width can be emitted. When the laser works in the external control mode, the laser can receive TTL outgoing signals, and the laser frequency controller triggers the laser to emit laser pulses.
According to the requirement that the single-pulse energy at the fiber end exceed 0.1 mJ, and based on reverse calculation and stability considerations, an LD-pumped, air-cooled configuration is adopted; while meeting the energy requirement, the energy jitter is kept within ±5% RMS. Restarting the laser after it has cooled down requires a stabilization warm-up, i.e. a preheating time of 10 minutes. Pulses are emitted at a fixed frequency of 100 Hz, and after 1 hour of continuous operation the stability of the laser output energy does not exceed ±5% RMS. Because a semiconductor laser is small, light and, thanks to direct electrical injection, has high quantum efficiency, a diode-pumped solid-state laser can effectively select a pump wavelength matched to the absorption spectrum of the laser medium. The total efficiency of a lamp-pumped Nd:YAG laser is usually below 3%, its thermal effects are significant at high power, and its beam quality and stability are poor. In general (taking the Nd:YAG laser as an example), the electro-optical conversion efficiency of an LD/LDA is 30%-50% and the optical-to-optical conversion efficiency on Nd:YAG is about 40%, so the total efficiency of an LD/LDA-pumped Nd:YAG laser is above 10%. Furthermore, because there is no flow-fluctuation noise from liquid or gaseous working substances and no plasma-fluctuation noise from a pump lamp, the noise characteristics of an LD-pumped solid-state laser are more than an order of magnitude better than those of a lamp-pumped one, and its frequency is stable. All-solid-state lasers using LD/LDA pump sources also have characteristics that other laser types cannot match, such as: (1) long service life, since the LD lifetime can reach 5k hours or more and does not require frequent replacement; (2) small thermo-optical distortion; (3) good beam quality; (4) strong reliability, 100 times that of a lamp pump; (5) light weight; (6) simple structure; and so on. Energy and energy jitter can be detected with an energy meter and determined by observing its readings. Finally, the pulse width is controlled to 15 ns ± 2 ns through the selection of the parameters and performance of each optical component; the factors that determine the pulse width are mainly the energy storage level of the crystal rod, the length of the resonant cavity, the output transmittance, the extinction ratio, and the speed of the Q-switch. During development, the final combination is determined from the above factors together with the overall stability and efficiency. The laser frequency controller is controlled by an external synchronizer and consists mainly of a circuit board and a housing. The circuit board comprises a power module, a main control module, a signal transmitting/receiving module and a communication module. Power module function: the whole circuit board must be provided with a stable 12 V DC, 3 A power supply environment; the power module comprises a main-system power scheme and an isolated-system power scheme.
The main system part power supply scheme has the advantages that 12V input is converted into 5V, 3.3V and 1.2V voltage, wherein the 5V voltage is 2-level conversion voltage, the stability of the whole power supply system can be improved, 3.3V is used for supplying power to the main control chip FPGA _ VCCIO and FPGA _ AUX, the signal sending and receiving module and the communication module isolation end are used for supplying power internally, and 1.2V is used for supplying power to the main control chip FPGA _ INT. The 12V input of the isolated system part power supply scheme is converted into 5V and 3.3V voltage, wherein the 5V voltage is 2-level conversion voltage, the stability of the whole power supply system can be improved, and the 3.3V power supply is supplied to the outside of the signal transmitting and receiving module and the isolated end of the communication module. The function description of the main control module: the main control module is a Spartan-6 FPGA chip produced by Xilinx, and the Spartan-6 FPGA is mainly characterized by low cost and low power consumption. The sixth generation Spartan series are based on low power consumption 45nm, 9 metallic copper layer, dual gate oxide layer process technology, and advanced power consumption management technology. This family contains 150000 logic cells, integrated PCI Express modules, advanced storage support, 250Mhz DSP Slice and 3.125Gbps low power transceivers. The chip PIN is selected from a Spartan-6 chip and a 144PIN PIN. The functional design of the product is satisfied from the perspective of the logic unit and the pin. The main control system comprises 7 parts including an FPGA chip, an LED group, a reset circuit, 32PIN _ IO, a JTAG programming port, a FLASH chip and a crystal oscillator circuit. The signal sending and receiving module function description:the signal sending and receiving module has the function that the FPGA _ IO PIN is optically coupled and isolated through the TLP183 to receive a laser TTL signal or send the laser TTL signal, and the signal sending and receiving module is optically coupled and isolated through the optocoupler. And (3) describing functions of the communication module: the communication module has the functions of formulating a serial port protocol by receiving a command sent by an upper computer, sending the command according to the serial port protocol, and sending the command to adjust the duty ratio of the laser; the communication module adopts ADM2687E from ADI company, and the product characteristics are as follows: ADM2682E is a complete integrated 5kV rm signal and power isolation data transceiver with a +/-15 kV ESD protection function, and is suitable for high-speed communication application on a multipoint transmission line. ADM2682E integrates a 5kV rms isolated DC/DC power supply, and omits an external DC/DC isolation module. The device is designed for balanced transmission lines and conforms to ANSI TIA/EIA-485-A-98 and ISO 8482:1987(E) standards. The device being integrated with ADI
Using ADI iCoupler technology, the device integrates a 3-channel isolator, a tri-state differential line driver, a differential input receiver and an ADI isoPower DC-to-DC converter in a single package. It is powered from a single 5 V or 3.3 V supply and provides a fully integrated signal- and power-isolated RS-485 solution. The ADM2682E has an active-high driver enable, and the receiver has an active-low receiver enable; when the receiver is disabled, its output enters a high-impedance state. These devices have current-limiting and thermal-shutdown features that prevent the excessive power dissipation caused by output short circuits or bus contention. The rated temperature range is the industrial temperature range.
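The serial-port protocol between the upper computer and the communication module is not specified in this description, so the host-side Python sketch below is purely hypothetical: it assumes a pyserial connection carried over the RS-485 link provided by the ADM2682E/ADM2687E transceiver and an invented ASCII command of the form DUTY <percent> for adjusting the laser duty ratio.

import serial  # pyserial

def set_laser_duty(port: str, duty_percent: float, baudrate: int = 115200) -> str:
    """Send a (hypothetical) duty-ratio command to the laser frequency controller.

    The command framing 'DUTY <percent>' and the 115200 baud setting are
    illustrative assumptions; the real serial protocol is defined by the upper
    computer and the FPGA firmware, not by this sketch.
    """
    if not 0.0 <= duty_percent <= 100.0:
        raise ValueError("duty ratio must be between 0 and 100 percent")
    with serial.Serial(port, baudrate, timeout=1.0) as link:
        link.write(f"DUTY {duty_percent:.1f}\n".encode("ascii"))
        reply = link.readline().decode("ascii", errors="replace").strip()
    return reply

# Example usage (the port name is platform dependent):
# print(set_laser_duty("/dev/ttyUSB0", 10.0))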
This embodiment also includes auxiliary electrical control. Other auxiliary electrical control systems are designed according to the application of the equipment and the requirements of the technical indexes, and mainly comprise switches, indicator lights, power supplies, relays, electric control cabinet wiring, aviation plugs and the like.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (7)

1. The laser three-dimensional dynamic scene simulation system is characterized by comprising a modeling system and a laser echo generation system;
the modeling system is used for modeling a three-dimensional scene and simulating the laser reflectivity of the three-dimensional scene according to a detected laser radar detection target and a background to obtain a three-dimensional scene target laser reflectivity simulation image, slicing the obtained three-dimensional scene target laser reflectivity simulation image according to a set detection distance resolution gate, combining the sliced image with corresponding detection distance information, storing the sliced image in an RGB (red, green and blue) channel of the image according to a time sequence, and sending the combined sliced image to the laser echo generation system;
and the laser echo generating system is used for decoding the received image data to obtain a time sequence image, respectively extracting graphic information and detection distance information according to the decoding time sequence, generating a laser echo corresponding to the graphic information according to the detection distance information under the control of a trigger signal of the detected laser radar seeker, and sending the laser echo to the detected laser radar.
2. The laser three-dimensional dynamic scene simulation system of claim 1,
the modeling system comprises a three-dimensional modeling module, a control interaction module, a simulation module and a data generation module;
the control interaction module is used for inputting scene simulation parameters and laser simulation parameters; the scene simulation parameters comprise target attitude, position and target speed;
the system comprises a three-dimensional modeling module, a transmission medium module and a data processing module, wherein the three-dimensional modeling module is used for establishing a three-dimensional geometric model, a target surface characteristic model and a transmission medium model of a target and a background according to a detected laser radar detection target and the background, and the target surface characteristic model is used for simulating the laser reflectivity of the target under the control of laser simulation parameters according to the surface texture and the material of the target under the condition of natural illumination; the transmission medium model is used for determining the attenuation degree of laser according to the environmental background radiation and the atmospheric environment change and simulating the laser reflectivity of the background;
the simulation module is used for carrying out three-dimensional scene simulation according to the scene simulation parameters and the established model to obtain a three-dimensional scene simulation image;
the data generation module is used for determining the laser reflectivity of the target and the background according to the laser simulation parameters by combining the established surface characteristic model and the established transmission medium model, dividing the laser reflectivity into gray levels, performing gray level simulation on the three-dimensional scene target and acquiring a three-dimensional scene reflectivity simulation image; the method comprises the steps of binarizing a three-dimensional scene reflectivity simulation image, performing two-dimensional slicing on the binarized image according to a detection range gate emitting laser, grouping the sliced images, storing the sliced images in an RGB channel of the image according to time sequence, storing detection range information of the sliced images in other channels of the RGB channel according to time sequence, completing image reconstruction, taking the storage time sequence of the sliced images and the detection range information as a reconstruction protocol, and sending reconstructed image data to a laser echo generation system.
3. The laser three-dimensional dynamic scene simulation system according to claim 2, wherein in the data generation module, the reflectivity is divided into gray levels, and gray simulation is performed on the three-dimensional scene to obtain a three-dimensional scene reflectivity simulation image, and the method comprises:
dividing the gray scale into 65 levels, wherein the gray level is 0 when there is neither target nor background, the gray level increases as the reflectivity increases, and the gray level is 64 at the highest reflectivity;
if the gray level of a pixel is a, the single pixel is expanded into 8×8 pixels, and a of these pixels are arranged rotating outward from the centre, completing the reflectivity simulation of the single pixel, where a ranges from 0 to 64.
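The single-pixel expansion of claim 3 can be sketched as follows. The Python function below (expand_pixel, an invented name) switches on a sub-pixels of an 8×8 block in an outward spiral starting near the centre; the exact rotating arrangement used by the real system is not given in the text, so this ordering is only one plausible interpretation.

import numpy as np

def expand_pixel(a: int) -> np.ndarray:
    """Expand one gray level a (0..64) into an 8x8 binary micro-pattern."""
    assert 0 <= a <= 64
    order = []                       # spiral visiting order of the 8x8 cells
    x, y = 3, 3                      # start near the centre of the block
    dx, dy = 1, 0                    # initial direction
    step = 1
    while len(order) < 64:
        for _ in range(2):           # two legs per step length
            for _ in range(step):
                if 0 <= x < 8 and 0 <= y < 8:
                    order.append((x, y))
                x, y = x + dx, y + dy
            dx, dy = -dy, dx         # turn 90 degrees
        step += 1
    block = np.zeros((8, 8), dtype=np.uint8)
    for px, py in order[:a]:
        block[py, px] = 1            # switch on the first a sub-pixels
    return block

print(expand_pixel(16).sum())        # 16 sub-pixels are on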
4. The laser three-dimensional dynamic scene simulation system according to claim 1, wherein the laser echo generation system comprises a graph distance separation module, a synchronous delay module, a DMD driving system, a laser and a collimation optical system;
the image distance separation module is used for receiving the reconstructed image data, decoding the received image data to obtain a time sequence image of the image data, respectively extracting graphic information and detection distance information from a data stream according to a storage time sequence in a reconstruction protocol, sending the graphic information to the digital DMD driving system, and sending the detection distance information to the synchronous delay module;
the synchronous delay module is used for receiving a trigger signal of the detected laser radar seeker, calculating the switch delay of the digital DMD driving system and the pulse delay of the laser according to the detection distance information of the trigger signal and the graph distance separation module, and respectively sending the switch delay of the digital DMD driving system and the pulse delay of the laser to the digital DMD driving system and the laser;
the laser light emitted by the laser is incident on the micromirror DMD array of the DMD driving system;
the micromirror DMD array images according to the received graphic information and laser incident from the laser, and emits laser to the collimating optical system according to the imaging, and the collimating optical system emits the laser as a laser echo signal to the laser radar to be detected.
5. The laser three-dimensional dynamic scene simulation system according to claim 4, wherein the reconstructed image data is transmitted through the DVI physical transmission protocol, and the graph distance separation module comprises a hardware decoding module and a software decoding module, wherein hardware decoding is performed by a decoding chip of model THC63DV16, which decodes the RGB data and the corresponding synchronization signals necessary for video transmission;
and software decoding is completed using a high-speed FPGA as the main control chip and the VHDL hardware description language, separating the image information and the detection distance information of the data stream according to the storage time sequence in the reconstruction protocol.
6. The laser three-dimensional dynamic scene simulation system according to claim 1, wherein the acquisition of the scene simulation parameters comprises self-setting or real-time acquisition through a network communication mode.
7. The laser three-dimensional dynamic scene simulation system of claim 1, further comprising a control information monitoring module;
and the control information monitoring module is used for monitoring and displaying the interactive control log, the scene simulation log and the image processing log in the simulation process.
CN202210535240.6A 2022-05-17 2022-05-17 Laser three-dimensional dynamic scene simulation system Active CN115015888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210535240.6A CN115015888B (en) 2022-05-17 2022-05-17 Laser three-dimensional dynamic scene simulation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210535240.6A CN115015888B (en) 2022-05-17 2022-05-17 Laser three-dimensional dynamic scene simulation system

Publications (2)

Publication Number Publication Date
CN115015888A true CN115015888A (en) 2022-09-06
CN115015888B CN115015888B (en) 2023-01-17

Family

ID=83069091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210535240.6A Active CN115015888B (en) 2022-05-17 2022-05-17 Laser three-dimensional dynamic scene simulation system

Country Status (1)

Country Link
CN (1) CN115015888B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5282014A (en) * 1992-12-11 1994-01-25 Hughes Aircraft Company Laser rangefinder testing system incorporationg range simulation
CN104049259A (en) * 2014-07-01 2014-09-17 南京大学 Lidar three-dimensional imaging system based on virtual instrument
CN108387907A (en) * 2018-01-15 2018-08-10 上海机电工程研究所 Flash-mode laser radar echo signal physical image simulation system and method
CN109031250A (en) * 2018-06-12 2018-12-18 南京理工大学 It is a kind of to emit quantitative detection system in servo-actuated laser radar performance room
CN112698350A (en) * 2020-12-09 2021-04-23 北京机电工程研究所 Laser active imaging radar target echo signal simulation system and method
CN112904353A (en) * 2021-01-20 2021-06-04 南京理工大学 Laser radar distance signal simulation method and simulation signal generator
CN114895288A (en) * 2022-05-10 2022-08-12 哈尔滨方聚科技发展有限公司 Laser echo generation system for three-dimensional scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘敏: "基于距离选通技术的激光三维成像系统", 《中国优秀硕士学位论文全文数据库》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115810320A (en) * 2023-02-08 2023-03-17 山东云海国创云计算装备产业创新中心有限公司 Cooperative control method, system, equipment and storage medium for gray scale image display
WO2024165055A1 (en) * 2023-02-08 2024-08-15 山东云海国创云计算装备产业创新中心有限公司 Cooperative control method, system and device for grayscale image display
CN117332585A (en) * 2023-09-27 2024-01-02 奕富通集成科技(珠海横琴)有限公司 Simulation model modeling method and system for time-of-flight ranging laser radar
CN117332585B (en) * 2023-09-27 2024-06-07 奕富通集成科技(珠海横琴)有限公司 Simulation model modeling method and system for time-of-flight ranging laser radar

Also Published As

Publication number Publication date
CN115015888B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
CN115015888B (en) Laser three-dimensional dynamic scene simulation system
CN114895288B (en) Laser echo generation system for three-dimensional scene
CN114898037B (en) Laser three-dimensional dynamic scene modeling system and modeling method
US10469831B2 (en) Near-instant capture of high-resolution facial geometry and reflectance
CN103885280B (en) Real three-dimensional display system based on mixing screen and method
CN102519307B (en) Ultraviolet-infrared dynamic scene simulator
László Monte-carlo methods in global illumination
US20190293937A1 (en) Augmented reality display device and method, and augmented reality glasses
Beasley et al. Dynamic infrared scene projectors based upon the DMD
CN208172809U (en) Image acquiring device, image reconstruction device, identity recognition device, electronic equipment
CN113534596A (en) RGBD stereo camera and imaging method
EP3734350A1 (en) Laser beam scanning display device, and augmented reality glasses
CN105973498A (en) Arc transient temperature field testing system based on spectral imaging method
CN108711186A (en) Method and apparatus, identity recognition device and the electronic equipment of target object drawing
CN107167998A (en) Space two waveband is combined dynamic scene projection simulation system
CN104735429B (en) Display device and display methods
CN109540469A (en) A kind of multichannel real-time optical target simulation system and semi-physical emulation platform
US20220003875A1 (en) Distance measurement imaging system, distance measurement imaging method, and non-transitory computer readable storage medium
CN106131517A (en) A kind of coloured image acquisition methods
Klein et al. A calibration scheme for non-line-of-sight imaging setups
CN105761301B (en) Three-dimensional laser scanning system and its colour point clouds image method for building up
Malik et al. Flying With Photons: Rendering Novel Views of Propagating Light
Tsao et al. Volumetric display by moving screen projection with a fast-switching spatial light modulator
Zhang et al. An FPGA-based on-chip system for real-time single pixel imaging
CN113125006B (en) Light source module optical power measurement system, optical power measurement method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant