CN115375890A - 5G-based four-eye stereoscopic vision camera adjusting system - Google Patents
5G-based four-eye stereoscopic vision camera adjusting system
- Publication number: CN115375890A
- Application number: CN202211307924.7A
- Authority: CN (China)
- Prior art keywords: image, data, unit, module, target object
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V10/147 — Details of sensors, e.g. sensor lenses
- G06V10/141 — Control of illumination
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V10/955 — Image or video understanding using specific electronic processors
- G06T15/04 — Texture mapping
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/33 — Determination of transform parameters for image registration using feature-based methods
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T7/85 — Stereo camera calibration
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30244 — Camera pose
Abstract
The invention discloses a 5G-based four-eye stereoscopic vision camera adjusting system, in the technical field of physical regulation and control of non-electrical variables. The system comprises a 5G communication module, which integrates a 5G chip, a memory, a radio frequency circuit and a positioning system for information transmission, exchange and communication; it connects to the 5G network through an interface and provides accurate, fast and efficient communication for information transmitted between software and hardware. An image acquisition data preprocessing module consists of an image sensor, a memory, a programmable controller and a processor. The invention uses a three-dimensional reconstruction calculation control module and a stereoscopic vision calibration module to calibrate and compute the stereoscopic target object, and an AI stereoscopic vision adjusting module to perform artificial-intelligence adjustment, thereby adjusting the stereoscopic vision intelligently, improving computing power, enabling intelligent identification, tracking, measurement and monitoring, and reducing the deviation of coordinate-conversion data from the standard value.
Description
Technical Field
The invention relates to the technical field of physical regulation and control of non-electrical variables, and in particular to a camera adjusting system based on 5G four-eye stereoscopic vision.
Background
With the rise of 5G technology, intelligent systems that incorporate it gain high precision, fast response, low cost and easy integration from 5G's low latency and large bandwidth; compared with 4G network technology, such a system is markedly better at information acquisition, data detection, video transmission, analysis and processing, and machine adjustment and control. A four-eye stereoscopic vision camera senses the three-dimensional structure of a scene with a camera set of four lenses, acquiring several images from different viewpoints to reconstruct the three-dimensional structure of a target object. Stereoscopic vision measurement acquires image information of a spatial object with the four-eye camera and uses a computer to calculate, from the pixel positions of corresponding points in the images, the three-dimensional coordinates of points on the object surface. A 5G-based four-eye stereoscopic vision camera adjusting system can therefore be widely applied in industry, agriculture, the service industry and manufacturing, for three-dimensional measurement, target identification, object positioning and intelligent monitoring.
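The stereoscopic measurement described above rests on triangulation. A minimal sketch (not from the patent; all numbers are illustrative) for a rectified two-camera pair: with focal length f in pixels and baseline B in metres, a point seen at horizontal pixel positions x_left and x_right has disparity d = x_left − x_right and depth Z = f·B/d. A four-eye rig extends this by combining several such pairs.

```python
def depth_from_disparity(f_px: float, baseline_m: float,
                         x_left: float, x_right: float) -> float:
    """Depth of a scene point from its disparity in a rectified stereo pair."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline_m / d

# 20 px of disparity with f = 800 px and B = 0.1 m puts the point 4 m away.
z = depth_from_disparity(800.0, 0.1, 420.0, 400.0)
print(z)  # 4.0
```

Note that depth resolution degrades quadratically with distance, which is one motivation for multi-camera rigs with several baselines.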
The existing 5G four-eye stereoscopic vision camera adjusting system runs a software architecture on hardware. The software comprises a 5G communication module, an image acquisition module, a stereoscopic vision calibration module and a data processing control module; the hardware comprises a computer, a server, a four-eye camera set, a frequency doubling synchronizer and an illuminating lamp. The current adjusting system tracks a target object by moving the camera bracket base and adjusts the camera manually to acquire image information of the spatial object.
Therefore, the existing 5G four-eye stereoscopic vision camera adjusting system has two problems: its adjustment method lacks an artificial-intelligence algorithm and its three-dimensional reconstruction is computationally slow; and its intelligent identification, tracking, measurement and monitoring suffer from deviations between coordinate-conversion data and the standard value.
Disclosure of Invention
In order to overcome the above defects in the prior art, embodiments of the present invention provide a camera adjusting system based on 5G four-eye stereoscopic vision. It uses a three-dimensional reconstruction calculation control module and a stereoscopic vision calibration module to calibrate and compute the stereoscopic target object, and an AI stereoscopic vision adjusting module to perform artificial-intelligence adjustment, so as to solve the problems identified in the background art.
In order to achieve this purpose, the invention provides the following technical scheme. A camera adjusting system based on 5G four-eye stereoscopic vision is formed from a software and hardware framework. The hardware framework comprises a computer, a four-eye camera set, an illuminating lamp set and a frequency doubling synchronizer. The software comprises a 5G communication module, which integrates a 5G chip, a memory, a radio frequency circuit and a positioning system for information transmission, exchange and communication; it is connected to the 5G network through an interface and provides accurate, fast and efficient communication for information transmitted between software and hardware;
the image acquisition data preprocessing module consists of an image sensor, a memory, a programmable controller and a processor; it scans and photographs a target object with the four-eye camera set, acquires image information for data processing, and transmits the processed data to the stereoscopic vision calibration module and the three-dimensional reconstruction calculation control module for further calculation and processing;
the three-dimensional reconstruction calculation control module receives the image data processed by the image acquisition data preprocessing module, recomputes it with a three-dimensional reconstruction technique, obtains standard data matching the image through a calculation algorithm, and transmits the three-dimensionally reconstructed image to the stereoscopic vision calibration module for calibration of the target object;
the stereoscopic vision calibration module receives the image information acquired by the image acquisition data preprocessing module and the data calculated by the three-dimensional reconstruction calculation control module, performs stereoscopic vision calibration, calculates the world, pixel, camera and image coordinate systems of the target object, and transmits the result to the computer and the AI stereoscopic vision adjusting module for intelligent regulation and control;
and the AI stereoscopic vision adjusting module receives the data calibrated by the stereoscopic vision calibration module and the adjusting instructions issued by the computer, locks onto a target object to identify, track, measure and detect data results while adjusting the lighting and pose of the four-eye camera set, and trains and optimizes the adjusted data with deep learning.
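The calibration module above relates four coordinate systems (world, camera, image, pixel). A minimal sketch of that chain, assuming a standard pinhole model with made-up intrinsics and extrinsics (the patent does not give concrete values): world to camera via rotation R and translation t, camera to the normalized image plane by perspective division, and image to pixel via the intrinsic matrix K.

```python
import numpy as np

def world_to_pixel(Xw: np.ndarray, R: np.ndarray, t: np.ndarray,
                   K: np.ndarray) -> tuple:
    """Project a 3D world point to pixel coordinates (u, v) with a pinhole model."""
    Xc = R @ Xw + t                        # world -> camera coordinates
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]    # camera -> normalized image plane
    uvw = K @ np.array([x, y, 1.0])        # image plane -> pixel coordinates
    return uvw[0], uvw[1]

K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx (illustrative values)
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with the world
t = np.array([0.0, 0.0, 2.0])          # world origin 2 m in front of the lens

u, v = world_to_pixel(np.array([0.1, -0.05, 0.0]), R, t, K)
print(u, v)  # 360.0 220.0
```

Calibration, as described in the scheme, is the inverse problem: estimating K, R and t from observed correspondences.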
In a preferred embodiment, the computer in the hardware is an operating terminal based on computer technology, 5G wireless network communication, remote operation and system data processing; it is a device connected to the software system server and consists of a processor, a memory and input and output devices. The four-eye camera set is a camera composed of four independent lenses whose angles can be adjusted to cover a panoramic view of three-dimensional space; the four lenses are arranged at the centre of each face of a regular tetrahedron, each camera scans 360 degrees of the plane in which it lies, overlapping image regions are identified automatically after the four lenses scan, and the images are corrected and stitched by a microprocessor inside the camera before output. The illuminating lamp set is controlled by a power supply and the computer; it compensates the light field while the four-eye camera set captures the target object and improves the acquired image information. The frequency doubling synchronizer is a carrier recovery circuit built into the four-eye camera set, in which the output signal frequency equals an integer multiple of the input signal frequency; it improves frequency stability.
In a preferred embodiment, the 5G communication module comprises a 5G communication chip, a memory interface and a radio frequency circuit. The 5G communication chip connects to 5G high-speed data services; using 5G communication technology characterized by high speed, low latency and massive connectivity, its information transmission rate reaches 1 Gbps with latency as low as 1 ms, increasing the network transmission speed. The memory interface sits on the sequential logic circuit and connects to the pins of the 5G communication chip and the radio frequency circuit; data are transmitted through the interface for registering and for reading and writing input and output signals. The radio frequency circuit comprises a transceiver for receiving and transmitting wireless communication signals: on transmission, the baseband signal to be sent is modulated onto a communication carrier, amplified and frequency-converted, and the radio frequency signal is transmitted over the 5G wireless network; on reception, the received wireless signal is amplified and frequency-converted, the radio frequency signal is demodulated, and the baseband output signal is recovered.
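The transmit/receive path above (modulate onto a carrier, then mix and filter to recover the baseband) can be illustrated numerically. This is a toy simulation, not the patent's circuit: frequencies, sample rate and the FFT-based low-pass filter are all illustrative choices.

```python
import numpy as np

fs = 10_000                       # sample rate, Hz (toy value)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal, 1000 samples
f_m, f_c = 50, 1_000              # baseband and carrier frequencies, Hz

baseband = np.cos(2 * np.pi * f_m * t)
transmitted = baseband * np.cos(2 * np.pi * f_c * t)   # up-conversion onto the carrier

# Receiver: coherent mixing with the same carrier, then a low-pass filter.
mixed = transmitted * np.cos(2 * np.pi * f_c * t)      # = baseband/2 + terms near 2*f_c
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 200] = 0                              # remove the 2*f_c components
recovered = 2 * np.fft.irfft(spectrum, len(mixed))     # factor 2 undoes the mixing loss

print(np.max(np.abs(recovered - baseband)))  # near-zero residual error
```

Because every tone here falls exactly on an FFT bin, recovery is essentially exact; a real receiver contends with noise, filter roll-off and carrier phase offset.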
In a preferred embodiment, the image acquisition data preprocessing module comprises a CMOS sensing unit, an SRAM storage unit, an FPGA control unit and an image data processing unit. The CMOS sensing unit is a semiconductor chip, fabricated from silicon and germanium, whose complementary transistor pairs produce currents that are processed, recorded and interpreted as image information; in the four-eye camera set it acts as an image acquisition sensor that converts the optical signal of the captured image into an electrical signal and the acquired image data into digital data for capturing the image. The SRAM storage unit can store, update and hold image data while powered without a refresh circuit; it consists of a memory cell array, row and column address decoders, sense amplifiers, control circuitry and buffer driver circuitry, with bistable circuits as memory cells. The FPGA control unit controls the CMOS sensing unit and the SRAM storage unit through programmable logic blocks, input/output blocks and interconnection routing resources; it can run image processing algorithms, edit control code and perform simulation debugging, handling data acquisition under logic control. The image data processing unit denoises, segments, enhances, restores and edge-processes the acquired image data.
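Two of the preprocessing steps named above, denoising and segmentation, can be sketched minimally. This is an illustrative toy (a 3x3 mean filter and a fixed global threshold on a synthetic 5x5 image), not the module's actual algorithms, which the patent does not specify.

```python
import numpy as np

def mean_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 box blur with edge replication: a minimal denoiser."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def segment(img: np.ndarray, threshold: float) -> np.ndarray:
    """Binary segmentation: foreground where intensity exceeds the threshold."""
    return (img > threshold).astype(np.uint8)

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 100.0                  # a single bright noise spike
smooth = mean_filter3(noisy)         # spike averaged down to (8*10+100)/9 = 20
mask = segment(smooth, 50.0)
print(mask.sum())  # 0 -- the spike no longer triggers a false foreground pixel
```

Running the denoiser before segmentation is the point of the ordering the paragraph describes: thresholding the raw frame would have flagged the spike as foreground.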
In a preferred embodiment, the three-dimensional reconstruction calculation control module comprises a depth data acquisition unit, a data preprocessing unit, a point cloud registration, fusion, positioning and attitude determination unit, a texture mapping surface generation unit and an AI algorithm control unit. The depth data acquisition unit obtains the depth of objects in the image by sensing pixels under varying light sources, measuring the return time of emitted light, and infrared triangulation. The data preprocessing unit builds the position information of the real target object obtained from the depth data acquisition unit into a mathematical model suited to computer logic. The point cloud registration, fusion, positioning and attitude determination unit records the three-dimensional coordinate points of the preprocessed model with their coordinate precision, spatial resolution and surface normal vectors; the recorded images carry the positioning and attitude data, and the point cloud coordinate information is stored in PCD format. The texture mapping surface generation unit generates the image surface by texture mapping once point cloud calculation, registration and fusion are complete. The AI algorithm control unit calculates the four coordinate coefficient values of the depth map point cloud image, converts them to the corresponding three-dimensional coordinates with the prior art coordinate point conversion formula, and controls the computing power by editing the AI algorithm in the FPGA and computing the image coordinate data.
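The step from a depth map to a point cloud, which precedes the registration and fusion described above, is the inverse of the pinhole projection. A hedged sketch with made-up intrinsics (fx, fy, cx, cy are illustrative, not calibrated values): each pixel (u, v) with depth Z back-projects to X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an H x W depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]            # pixel row (v) and column (u) grids
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 2.0)             # a flat surface 2 m from the camera
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
print(pts[0])  # [-0.01 -0.01  2.  ]
```

Point clouds from the four viewpoints, produced this way, are what the registration and fusion unit would then align into a single model (commonly serialized in the PCD format the paragraph mentions).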
In a preferred embodiment, the stereoscopic vision calibration module calibrates, for the four-eye camera set, the process of converting the target object from the world coordinate system to the image coordinate system. It comprises a distortion coefficient, external parameter calibration, internal parameter calibration, a target checkerboard, and a unit for calculating calibration position coordinates. The distortion coefficient captures the distortion caused by deviation of the four-eye camera set's lens position from the ray to the imaging plane; the values relating the four coordinate systems are calculated with an AI algorithm, the amount of distortion is computed, and a radial-tangential distortion model is fitted to obtain the distortion coefficients. External parameter calibration yields the parameters of the four-eye camera set's calibration model, comprising a rotation matrix and a translation vector. Internal parameter calibration covers the conversion from three-dimensional image points to two-dimensional point coordinates. The target checkerboard is a checkerboard file in a specified input format; the computer extracts corner points and target images to calibrate the internal and external parameters, obtains the pose of the target image relative to the four-eye camera set, and computes the pose coordinate data of the target object. The unit for calculating calibration position coordinates computes the coordinate point data of the calibration position from the existing formulas of the four coordinate systems.
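The radial-tangential model mentioned above has a standard form (often called the Brown-Conrady model). A minimal sketch, with illustrative rather than calibrated coefficients: a normalized image point (x, y) at radius r² = x² + y² is displaced by radial terms k1, k2 and tangential terms p1, p2.

```python
def distort(x: float, y: float,
            k1: float, k2: float, p1: float, p2: float) -> tuple:
    """Apply the radial-tangential (Brown-Conrady) lens distortion model."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2        # radial distortion factor
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the model is the identity.
assert distort(0.3, -0.2, 0, 0, 0, 0) == (0.3, -0.2)

# Barrel distortion (k1 < 0) pulls points toward the optical centre.
x_d, y_d = distort(0.3, -0.2, -0.1, 0.0, 0.0, 0.0)
print(x_d, y_d)  # slightly closer to the origin than (0.3, -0.2)
```

Calibration fits k1, k2, p1, p2 (together with the intrinsics) so that projected checkerboard corners match their observed pixel positions.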
In a preferred embodiment, the AI stereoscopic vision adjusting module comprises an intelligent target identification unit, a target object tracking unit, an AI obstacle detection unit, a camera lighting and pose adjustment unit, and an AI visual deep learning unit. The intelligent target identification unit reads the reconstructed image data through the computer, processes, analyses and understands the extracted feature quantities, and performs image identification with a deep learning algorithm. The target object tracking unit borrows the principle by which a solar cell automatically tracks a light source for maximum power: an individual target object is locked by scanning its edge contour, shortest-distance measurement, surface concave-convex features and centre position point with the four-eye camera set; when the target object moves, the four-eye camera set applies artificial-intelligence tracking, and when targets disperse, full coverage by the four-eye camera set records the multi-target trajectories for tracking. The AI obstacle detection unit detects obstacles while tracking the target object and monitors the adjustment state of the four-eye camera set once the target object's dynamic and static obstacles are removed, informing the adjusting system's intelligent decisions. The camera lighting and pose adjustment unit illuminates the target object by shaping and varying the intensity of the illuminating lamp set, moves the position and angle of the four-eye camera set to compare against the coordinate system results computed from the calibration image, and judges and adjusts the stereoscopic vision effect. The AI visual deep learning unit applies artificial intelligence: image data collected by the four-eye camera set are labelled and used to train a neural network, and the trained network adjusts the stereoscopic vision of the image.
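The "centre position point" tracking idea above can be sketched minimally: given binary foreground masks for two consecutive frames, the centroid displacement gives the per-frame motion to feed back into camera pose adjustment. This is a toy illustration on synthetic 8x8 masks, not the patent's tracker.

```python
import numpy as np

def centroid(mask: np.ndarray) -> tuple:
    """Centroid (row, col) of the nonzero pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

frame1 = np.zeros((8, 8), dtype=np.uint8)
frame1[2:4, 2:4] = 1                       # 2x2 target near the upper left
frame2 = np.zeros((8, 8), dtype=np.uint8)
frame2[4:6, 5:7] = 1                       # same target, moved down and right

(y1, x1), (y2, x2) = centroid(frame1), centroid(frame2)
print(y2 - y1, x2 - x1)  # 2.0 3.0 -- per-frame displacement of the target
```

A real tracker would add the edge-contour and surface-feature cues the paragraph lists and handle the multi-target case; the centroid is only the simplest "lock-on" signal.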
The technical effects and advantages of the invention are as follows:
the invention uses the three-dimensional reconstruction calculation control module and the stereoscopic vision calibration module to calibrate and compute the stereoscopic target object, and the AI stereoscopic vision adjusting module to perform artificial-intelligence adjustment, thereby adjusting the stereoscopic vision intelligently, improving computing power, enabling intelligent identification, tracking, measurement and monitoring, and reducing the deviation of coordinate-conversion data from the standard value.
Drawings
Fig. 1 is a block diagram of the 5G-based four-eye stereoscopic vision camera adjusting system of the invention.
Fig. 2 is a schematic diagram of the 5G communication module of the invention.
Fig. 3 is a schematic diagram of the image acquisition data preprocessing module of the invention.
Fig. 4 is a schematic diagram of the three-dimensional reconstruction calculation control module of the invention.
Fig. 5 is a schematic diagram of the stereoscopic vision calibration module of the invention.
Fig. 6 is a schematic diagram of the AI stereoscopic vision adjusting module of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
The embodiment provides a 5G-based four-eye stereoscopic vision camera adjusting system, as shown in Fig. 1, constructed from software and hardware. The hardware comprises a computer, a four-eye camera set, an illuminating lamp set and a frequency doubling synchronizer. The software comprises a 5G communication module, which integrates a 5G chip, a memory, a radio frequency circuit and a positioning system for information transmission, exchange and communication; it is connected to the 5G network through an interface and provides accurate, fast and efficient communication for information transmitted between software and hardware;
the image acquisition data preprocessing module consists of an image sensor, a memory, a programmable controller and a processor; it scans and photographs a target object with the four-eye camera set, acquires image information for data processing, and transmits the processed data to the stereoscopic vision calibration module and the three-dimensional reconstruction calculation control module for further calculation and processing;
the three-dimensional reconstruction calculation control module receives the image data processed by the image acquisition data preprocessing module, recomputes it with a three-dimensional reconstruction technique, obtains standard data matching the image through a calculation algorithm, and transmits the three-dimensionally reconstructed image to the stereoscopic vision calibration module for calibration of the target object;
the stereoscopic vision calibration module receives the image information acquired by the image acquisition data preprocessing module and the data calculated by the three-dimensional reconstruction calculation control module, performs stereoscopic vision calibration, calculates the world, pixel, camera and image coordinate systems of the target object, and transmits the result to the computer and the AI stereoscopic vision adjusting module for intelligent regulation and control;
and the AI stereoscopic vision adjusting module receives the data calibrated by the stereoscopic vision calibration module and the adjusting instructions issued by the computer, locks onto a target object to identify, track, measure and detect data results while adjusting the lighting and pose of the four-eye camera set, and trains and optimizes the adjusted data with deep learning.
As shown in fig. 1, the computer in the hardware of this embodiment is an operating terminal that processes system data on the basis of computer technology, 5G wireless network communication technology and remote operation technology; it connects to the software system server and consists of a processor, a memory and input/output devices. The four-eye camera set is a camera composed of four independent lenses whose angles can be adjusted to cover the panorama of a three-dimensional space; the four lenses are mounted at the center of each face of a regular tetrahedron, each lens scans 360 degrees of the plane in which it lies, the overlapping portions of the four scanned images are identified automatically, and the images are corrected and stitched by a microprocessor inside the camera before being output. The illuminating lamp set is controlled by a power supply and the computer; it compensates the light field while the four-eye camera set captures the target object, improving the acquired image information. The frequency multiplication synchronizer is a carrier-recovery circuit built into the four-eye camera set whose output signal frequency equals an integer multiple of the input signal frequency; it improves frequency stability.
As shown in fig. 2, the 5G communication module of this embodiment specifically comprises a 5G communication chip, a memory interface and a radio frequency circuit. The 5G communication chip connects to 5G high-speed data services; 5G communication offers high speed, low latency and massive connectivity, with a transmission rate of up to 1 Gbps and latency as low as 1 ms, raising the network transmission speed. The memory interface sits on the sequential logic circuit and connects to the pins of the 5G communication chip and the radio frequency circuit; data pass through it for registering and for reading and writing input and output signals. The radio frequency circuit contains a transceiver for receiving and transmitting wireless communication signals: on the transmit path, the baseband signal to be sent is modulated onto the communication carrier, amplified and frequency-converted, and the resulting radio frequency signal is transmitted over the 5G wireless network; on the receive path, the received wireless signal is amplified and frequency-converted, the radio frequency signal is demodulated, and the baseband output signal is recovered.
As shown in fig. 3, the image acquisition data preprocessing module of this embodiment specifically comprises a CMOS sensing unit, an SRAM storage unit, an FPGA control unit and an image data processing unit. The CMOS sensing unit is a semiconductor chip with positive and negative electrodes fabricated from silicon and germanium; the current produced by the complementary effect on the chip is processed, recorded and interpreted into image information. As the sensor with which the four-eye camera set acquires images, it converts the optical signal of the acquired image into an electrical signal and the acquired image data into digital data for capturing the image. The SRAM storage unit stores, updates and retains image data without a refresh circuit as long as power is applied; the SRAM consists of a storage cell array, row and column address decoders, sense amplifiers, a control circuit and buffer drive circuits, with a bistable circuit as the storage cell. The FPGA control unit governs the CMOS sensing unit and the SRAM storage unit through programmable logic blocks, input/output blocks and interconnect routing resources; it can run image processing algorithms, edit control code and perform simulation debugging, handling data acquisition under logic control. The image data processing unit denoises, segments, enhances, restores and edge-detects the acquired image data;
the image processing method comprises the following specific steps:
s1, firstly, receiving an image analog signal collected by a four-eye camera set through an image data processing unit, converting the image signal into a digital signal through digital image processing, and reprocessing the digital signal by using a computer;
s2, acquiring required image data by adopting a technical means of digital image processing of a computer, such as geometric processing, gray level transformation, arithmetic processing, filtering and denoising processing, image enhancement and image restoration processing;
and S3, finally, carrying out image reconstruction and image identification processing on the digital signals calculated by the computer through modeling calculation.
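Steps S1 to S3 can be sketched as a minimal digital pipeline. The 3x3 mean-filter kernel, the global-mean threshold and all names below are illustrative assumptions, not taken from the patent; a real system would use a full image library (e.g. OpenCV) for the S2 operations.

```python
import numpy as np

def preprocess(image):
    """Toy S2 stage: 3x3 mean-filter denoising, then global-threshold
    segmentation of the (already digitized, per S1) image."""
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")  # replicate borders for the filter
    # 3x3 mean filter: average the nine shifted copies of the image
    denoised = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    # segment the foreground with a global mean threshold
    return (denoised > denoised.mean()).astype(np.uint8)

# A bright square on a dark background survives segmentation
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 255.0
mask = preprocess(frame)
print(mask[3, 3], mask[0, 0])  # 1 0
```

The S3 reconstruction and recognition stages would consume `mask` downstream; they are omitted here.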
As shown in fig. 4, the three-dimensional reconstruction calculation control module of this embodiment specifically comprises a depth data acquisition unit, a data preprocessing unit, a point cloud registration, fusion, positioning and pose determination unit, a texture mapping surface generation unit and an AI algorithm control unit. The depth data acquisition unit obtains the depth information of objects in the image from pixels sensing light source variation, from measuring the return time of emitted light, and from infrared triangulation ranging. The data preprocessing unit builds the real target object's position information obtained from the depth data acquisition unit into a mathematical model that matches the computer's logical representation. The point cloud registration, fusion, positioning and pose determination unit records the three-dimensional coordinate points of the preprocessed model in terms of coordinate precision, spatial resolution and surface normal vectors; the recorded image is the positioning and pose data, and the point cloud coordinate information is stored in PCD format. The texture mapping surface generation unit generates the image surface through the texture mapping technique of image processing once the point cloud has been computed, registered and fused. The AI algorithm control unit computes the four coordinate coefficient values of the point cloud image from the depth map, converts them into the corresponding three-dimensional coordinates through existing coordinate conversion formulas, and controls the computing power by editing the AI algorithm in the FPGA and computing the image coordinate data.
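The depth-data step above amounts to back-projecting each depth pixel into a camera-frame 3-D point before registration. A minimal pinhole-model sketch, where `fx`, `fy` (focal lengths in pixels) and the principal point `(cx, cy)` are illustrative parameter names not given in the patent:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (camera-frame Z per pixel) into an
    N x 3 array of camera-frame points using the pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

pts = depth_to_points(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts[0])  # pixel (0, 0) at depth 2 -> [-1. -1.  2.]
```

A real pipeline would then register and fuse such point sets and store them in PCD format, as the unit description states.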
As shown in fig. 5, the stereoscopic vision calibration module of this embodiment handles the conversion of the target object's calibration from the world coordinate system to the image coordinate system for the four-eye camera set; it comprises a distortion coefficient unit, an external parameter calibration unit, an internal parameter calibration unit, a target checkerboard and a calibration position coordinate calculation unit. The distortion coefficient describes the distortion caused by light passing through a lens of the four-eye camera set deviating from its ideal position on the imaging plane; the values relating the four coordinate systems are computed by the AI algorithm, the distortion amount is solved, and a radial-tangential distortion model is established to solve for the distortion coefficients. The external parameter calibration yields the parameters of the four-eye camera set's calibration model, namely a rotation matrix and a translation vector; the internal parameter calibration converts the three-dimensional points of the image into two-dimensional point coordinates. The target checkerboard is an input checkerboard file of specified format; the computer extracts corner points and target images to calibrate the internal and external parameters, obtains the position and pose of the target image relative to the four-eye camera set, and computes the position and pose coordinate data of the target object. The calibration position coordinate calculation unit computes the calibration position coordinates from the four existing coordinate system formulas. Specifically, let any point $P$ in space have world coordinates $(X_w, Y_w, Z_w)$ and coordinates $(X_c, Y_c, Z_c)$ in the camera coordinate system of the four-eye camera set. The world coordinate system converts into the camera coordinate system by

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},$$

where $R$ is the rotation matrix, $t$ is the translation vector and $0^{T}$ is the transposed zero vector. The physical (image-plane) coordinates of $P$ are $(x, y)$, and the camera coordinate system converts into the physical coordinate system by

$$x = \frac{f X_c}{Z_c}, \qquad y = \frac{f Y_c}{Z_c},$$

where $f$ is the focal length. The image (pixel) coordinates of $P$ are $(u, v)$, and the physical coordinate system converts into the image coordinate system by

$$u = \frac{x}{dX} + u_0, \qquad v = \frac{y}{dY} + v_0,$$

where $dX$ and $dY$ are the physical width and height of each image pixel in mm, and $(u_0, v_0)$ is the principal point.
As shown in fig. 6, the AI stereoscopic vision adjusting module of this embodiment specifically comprises an intelligent target identification unit, a target object tracking unit, an AI obstacle detection unit, a camera lighting and pose adjusting unit, and an AI visual deep learning unit. The intelligent target identification unit reads the reconstructed image data through the computer, extracts, analyzes and interprets feature quantities, and performs image recognition with a deep learning algorithm. The target object tracking unit works on the principle by which a solar cell maximizes captured light energy by autonomously tracking the light source: the four-eye camera set scans the edge contour, shortest measured distance, surface concave-convex features and center point of the target object to lock onto it as an individual target; when the target object is moving, the four-eye camera set tracks it with artificial intelligence tracking technology, and when targets are dispersed, the full coverage of the four-eye camera set records the trajectories of multiple targets so that tracking is maintained. The AI obstacle detection unit detects obstacles while the target object is tracked and monitors the state of the four-eye camera set's adjustment process once the target object's dynamic and static obstacles are removed, serving the intelligent decisions of the adjusting system. The camera lighting and pose adjusting unit illuminates the target object by shaping and dimming the illuminating lamp set, moves the position and angle of the four-eye camera set, compares the result against the coordinate systems computed from the calibration image, and judges and adjusts the stereoscopic vision effect. The AI visual deep learning unit applies artificial intelligence technology: the image data collected by the four-eye camera set are labeled and used for training, and the trained neural network adjusts the stereoscopic vision of the image.
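The multi-target trajectory recording described above can be sketched as greedy nearest-neighbour track association. Everything here (the 50-pixel gate, the list-of-lists track store) is an illustrative assumption; a production tracker would add motion prediction, e.g. a Kalman filter:

```python
import math

def update_tracks(tracks, detections, max_dist=50.0):
    """Append each detection to the nearest existing track within
    max_dist; unmatched detections start new tracks (newly
    dispersed targets)."""
    unmatched = list(detections)
    for track in tracks:
        if not unmatched:
            break
        last = track[-1]
        best = min(unmatched, key=lambda d: math.dist(last, d))
        if math.dist(last, best) <= max_dist:
            track.append(best)
            unmatched.remove(best)
    tracks.extend([d] for d in unmatched)  # open a track per new target
    return tracks

tracks = [[(0.0, 0.0)]]                    # one already-locked target
update_tracks(tracks, [(3.0, 4.0), (200.0, 200.0)])
print(len(tracks), tracks[0][-1])  # 2 (3.0, 4.0)
```

The nearby detection extends the locked track, while the distant one is treated as a dispersed target and recorded as a new trajectory.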
The specific steps for adjusting the lighting and the pose are as follows:
A1, first, while the four-eye camera set scans the target object, the computer adjusts the shape, intensity and angle of the illumination projected by the illuminating lamp set onto the target object, so that the pixels of the target object captured by the cameras differ in depth;
A2, while the target object is scanned by the four-eye camera set, its coordinate system is computed intelligently from the object's dynamic or static state, its spatial pose and trajectory, its contour shadow, and the image data with which the four-eye camera set covers it, and the coordinates are compared to verify the accuracy of the pose;
A3, the AI stereoscopic vision adjusting module then regulates the stereoscopic vision of the four-eye camera set;
and A4, finally, the neural network trained by the deep learning algorithm applies artificial intelligence optimization to the stereoscopic vision adjustment of the 5G four-eye stereoscopic vision camera adjusting system.
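Steps A2 and A3 amount to a feedback loop: compute the error between the measured pose and the pose expected from the calibrated coordinate systems, then nudge the camera set by a fraction of it. A minimal proportional-control sketch; the gain and tolerance values are illustrative assumptions:

```python
def adjust_pose(measured, target, gain=0.5, tol=1e-3):
    """One correction step toward the calibrated target pose.
    Returns the new pose and whether it already matched."""
    error = [t - m for m, t in zip(measured, target)]
    if all(abs(e) < tol for e in error):
        return list(measured), True   # A2 comparison reports a match
    return [m + gain * e for m, e in zip(measured, error)], False

# Iterate A2/A3 until the comparison reports a match
pose, done = [0.0, 0.0, 0.0], False
while not done:
    pose, done = adjust_pose(pose, [1.0, 2.0, 0.0])
print([round(p, 2) for p in pose], done)  # [1.0, 2.0, 0.0] True
```

In the described system the "nudge" would drive the camera set's motorized position and angle rather than a plain list of numbers.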
Finally, it should be noted that the above description covers only preferred embodiments of the present invention and does not limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (7)
1. A 5G-based four-eye stereoscopic vision camera adjusting system, characterized in that: the system is built from a software and hardware framework, wherein the hardware comprises a computer, a four-eye camera set, an illuminating lamp set and a frequency multiplication synchronizer; the software comprises a 5G communication module, which integrates a 5G chip, a memory, a radio frequency circuit and a positioning system and is used for information transmission, exchange and communication;
the image acquisition data preprocessing module scans and photographs a target object through the four-eye camera set, acquires image information for data processing, and transmits the processed data to the stereoscopic vision calibration module and the three-dimensional reconstruction calculation control module for further calculation and processing;
the three-dimensional reconstruction calculation control module receives the image data processed by the image acquisition data preprocessing module, recalculates it, obtains standard data fitted to the image through a calculation algorithm, and transmits the three-dimensionally reconstructed image to the stereoscopic vision calibration module for calibration of the target object;
the stereoscopic vision calibration module receives the image information acquired by the image acquisition data preprocessing module and the data calculated by the three-dimensional reconstruction calculation control module, performs stereoscopic vision calibration, computes the coordinate systems of the target object, and transmits the results to the computer and the AI stereoscopic vision adjusting module for intelligent regulation and control;
and the AI stereoscopic vision adjusting module receives the data calibrated by the stereoscopic vision calibration module and the adjusting instructions issued by the computer, and locks onto the target object to identify, track, measure and detect it while adjusting the lighting and the pose of the four-eye camera set.
2. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the computer in the hardware is an operating terminal that processes system data on the basis of computer technology, 5G wireless network communication technology and remote operation technology; the four-eye camera set is a camera composed of four independent lenses whose angles can be adjusted to cover the panorama of a three-dimensional space; the illuminating lamp set is controlled by a power supply and the computer and compensates the light field while the four-eye camera set captures the target object; the frequency multiplication synchronizer is a circuit built into the four-eye camera set whose output signal frequency equals an integer multiple of the input signal frequency.
3. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the 5G communication module comprises a 5G communication chip, a memory interface and a radio frequency circuit, wherein the 5G communication chip connects to 5G high-speed data services; the memory interface sits on the sequential logic circuit and connects to the pins of the 5G communication chip and the radio frequency circuit, with data transmitted through the interface; the radio frequency circuit comprises a transceiver for receiving and transmitting wireless communication signals.
4. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the image acquisition data preprocessing module comprises a CMOS sensing unit, an SRAM storage unit, an FPGA control unit and an image data processing unit, wherein the CMOS sensing unit is the sensor with which the four-eye camera set acquires images, converting the optical signal of the acquired image into an electrical signal and the acquired image data into digital data for capturing the image; the SRAM storage unit stores, updates and retains image data without a refresh circuit as long as power is applied; the FPGA control unit can run image processing algorithms, edit control code and perform simulation debugging, handling data acquisition under logic control; the image data processing unit denoises, segments, enhances, restores and edge-detects the acquired image data.
5. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the three-dimensional reconstruction calculation control module comprises a depth data acquisition unit, a data preprocessing unit, a point cloud registration, fusion, positioning and pose determination unit, a texture mapping surface generation unit and an AI algorithm control unit, wherein the depth data acquisition unit obtains the depth information of objects in the image from pixels sensing light source variation, from measuring the return time of emitted light, and from infrared triangulation ranging; the data preprocessing unit builds the real target object's position information obtained from the depth data acquisition unit into a mathematical model that matches the computer's logical representation; the point cloud registration, fusion, positioning and pose determination unit records the three-dimensional coordinate points of the preprocessed model in terms of coordinate precision, spatial resolution and surface normal vectors, the recorded image being the positioning and pose data; the texture mapping surface generation unit generates the image surface through the texture mapping technique of image processing once the point cloud has been computed, registered and fused; the AI algorithm control unit computes the four coordinate coefficient values of the point cloud image from the depth map and controls the computing power by editing the AI algorithm in the FPGA and computing the image coordinate data.
6. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the stereoscopic vision calibration module handles the conversion of the target object's calibration from the world coordinate system to the image coordinate system for the four-eye camera set and comprises a distortion coefficient unit, an external parameter calibration unit, an internal parameter calibration unit, a target checkerboard and a calibration position coordinate calculation unit, wherein the distortion coefficient describes the distortion caused by light passing through a lens of the four-eye camera set deviating from its ideal position on the imaging plane, the values relating the four coordinate systems being computed by the AI algorithm, the distortion amount solved, and a radial-tangential distortion model established to solve for the distortion coefficients; the external parameter calibration yields the parameters of the four-eye camera set's calibration model, comprising a rotation matrix and a translation vector; the internal parameter calibration converts the three-dimensional points of the image into two-dimensional point coordinates; the target checkerboard is an input checkerboard file of specified format, the computer extracting corner points and target images to calibrate the internal and external parameters, obtaining the position and pose of the target image relative to the four-eye camera set, and computing the position and pose coordinate data of the target object; the calibration position coordinate calculation unit computes the calibration position coordinates from the four existing coordinate system formulas.
7. The 5G-based four-eye stereoscopic vision camera adjusting system according to claim 1, characterized in that: the AI stereoscopic vision adjusting module comprises an intelligent target identification unit, a target object tracking unit, an AI obstacle detection unit, a camera lighting and pose adjusting unit and an AI visual deep learning unit, wherein the intelligent target identification unit reads the reconstructed image data through the computer, extracts, analyzes and interprets feature quantities, and performs image recognition with a deep learning algorithm; the target object tracking unit locks onto an individual target object by scanning its edge contour, shortest measured distance, surface concave-convex features and center point with the four-eye camera set, tracks a moving target object with artificial intelligence tracking technology, and, when targets are dispersed, records the trajectories of multiple targets with the full coverage of the four-eye camera set so that tracking is maintained; the AI obstacle detection unit detects obstacles while the target object is tracked and monitors the state of the four-eye camera set's adjustment process once the target object's dynamic and static obstacles are removed; the camera lighting and pose adjusting unit illuminates the target object by shaping and dimming the illuminating lamp set, moves the position and angle of the four-eye camera set, compares the result against the coordinate systems computed from the calibration image, and judges and adjusts the stereoscopic vision effect; the AI visual deep learning unit labels and trains on the image data collected by the four-eye camera set and adjusts the stereoscopic vision of the image with the trained neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211307924.7A CN115375890A (en) | 2022-10-25 | 2022-10-25 | Based on four mesh stereovision cameras governing system of 5G |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115375890A true CN115375890A (en) | 2022-11-22 |
Family
ID=84073467
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211307924.7A Pending CN115375890A (en) | 2022-10-25 | 2022-10-25 | Based on four mesh stereovision cameras governing system of 5G |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115375890A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104897062A (en) * | 2015-06-26 | 2015-09-09 | 北方工业大学 | Visual measurement method and device for shape and position deviation of part non-coplanar parallel holes |
WO2021184218A1 (en) * | 2020-03-17 | 2021-09-23 | 华为技术有限公司 | Relative pose calibration method and related apparatus |
CN114995450A (en) * | 2022-06-21 | 2022-09-02 | 上海托旺数据科技有限公司 | Intelligent navigation method and system for blind people by using multi-eye stereoscopic vision |
CN115035162A (en) * | 2022-06-14 | 2022-09-09 | 北京邮电大学 | Monitoring video personnel positioning and tracking method and system based on visual slam |
Non-Patent Citations (1)
Title |
---|
罗庆生 (Luo Qingsheng) et al.: "Weld Seam Tracking Technology for Mobile Robots in Confined Spaces" * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20221122 |