CN109522951A - Method for multi-dimensional information data acquisition and storage of an environment and targets - Google Patents

Method for multi-dimensional information data acquisition and storage of an environment and targets (Download PDF)

Info

Publication number
CN109522951A
CN109522951A (application CN201811335547.1A)
Authority
CN
China
Prior art keywords
data
target
camera
pixel
acquisition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811335547.1A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhenfu Intelligent Technology Co ltd
Original Assignee
Shanghai Wisdom Tong Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wisdom Tong Technology Co Ltd filed Critical Shanghai Wisdom Tong Technology Co Ltd
Priority to CN201811335547.1A priority Critical patent/CN109522951A/en
Publication of CN109522951A publication Critical patent/CN109522951A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method for multi-dimensional information data acquisition and storage of an environment and targets. Data are acquired by a deeply fused multi-sensor system that includes a camera. Taking the image pixel as the organizing basis, the data collected for targets by the several sensors in their respective perception dimensions are uniformly aligned and merged according to the spatial sampling projection model by which the optical camera captures an image: the data sampled for a target by the other sensors are mapped onto the imaging pixel positions at which the same target appears after being captured and imaged by the camera, so that all classes of acquired data are combined with the pixel data of the camera image, for multi-dimensional information data acquisition and storage. The method is applicable to scenes in which cameras of various parameters are fused with other sensors for acquiring and storing environment and target data; it provides a unified multi-dimensional data acquisition and preservation model in which the acquired and saved sample data share a unified coordinate reference frame in both the time and space dimensions.

Description

Method for multi-dimensional information data acquisition and storage of an environment and targets
Technical field
The present invention relates to a method for multi-dimensional information data acquisition and storage of an environment and targets. Data acquisition and storage are performed by a deeply fused multi-sensor system that includes a camera, using a multi-dimensional measured-parameter data organization based on the image pixel: data from the different perception domains are uniformly aligned and merged according to the spatial sampling projection model by which the optical camera captures an image, for the acquisition and storage of the data. The method is applicable to scenes in which cameras of various parameters are fused with other sensors for data acquisition and storage (including the acquisition and preservation of positive and negative sample data and semantic sample data for machine learning). It can uniformly and efficiently preserve the multi-dimensional data of system environment perception and target detection, reducing the relative data volume, and the associated cost, of data acquisition, storage and transmission.
Background technique
In the fields of target recognition and environment perception, target recognition algorithms require relatively complete data acquisition and preservation for task processing; if machine learning is used, a large number of positive and negative samples are also needed for learning and training. Meanwhile, target recognition generates a large amount of intermediate data for the processing units to consume, and "cloud computing" (remote processing units) may participate in the recognition process, so efficient data acquisition and preservation techniques are required.
The sensors commonly used at present for target recognition and environment perception include cameras, microwave radars, infrared sensors, ultrasonic radars and laser radars. They are widely used in advanced driver-assistance systems (ADAS) and automated driving systems, in robots, automated guided vehicles (AGV), and in all kinds of equipment and systems that require environment perception and target detection capabilities.
A camera perceives the texture (shape, contour, illumination and shading, etc.) and color of a target, recording the image information of an instant. With a time axis, a camera can string the recorded instants together into a video stream, whose recorded information can be used for event playback and time-correlated event analysis. An infrared sensor (infrared camera) is a kind of camera that captures the infrared radiation information of a target and saves it in picture and video formats. A microwave radar can capture a target's relative distance, relative velocity and radar cross-section (RCS) data, and outputs them as a heat map, as a quantitative expression in the dimensions of relative distance, relative velocity and RCS (Radar Object Data Output), or as point cloud data. A laser radar mainly outputs a target's point cloud data by detecting its spatial position (relative distance and angular position coordinates).
Each kind of sensor has its own information perception dimensions. A common camera, for example, can capture a target's image information, vividly recording the photographed environment at that instant together with the target's texture and color; but we cannot accurately extract the target's distance or velocity from a single picture, and it is difficult to predict from one traditional photograph what will happen at the next moment. We can record and analyze events in video form (the essence of video is a series of pictures, each tagged with its shooting moment, strung together along a time axis and played back along it), but video brings a flood of data, and with it the demand for large transmission bandwidth and storage space. The recording modes of other sensors, such as radar, ultrasonic sensors and laser radar, record the information of their own perception dimensions, for example the distance and speed of a target; but the data they record, in their current recording modes (data structures), are insufficient in dimensionality and completeness for the comprehensive description of target features that we need, and cannot be used directly for environment perception and event prediction. There has previously been no general method for organizing the data acquired by these different kinds of sensors on unified time and space axes and preserving them efficiently. We need an effective data organization method with which the system can perform efficient multi-layer fusion on the various data it acquires, making full use of the combined information from different dimensions to support richer and more effective intelligent sampling and preservation, for target feature extraction and data analysis.
The task of the present invention is to provide a method for multi-dimensional information data acquisition and storage of an environment and targets. A deeply fused multi-sensor system that includes a camera performs data acquisition and storage: the data detected by the camera and by the other kinds of sensors in the system are uniformly aligned and merged, for data sample acquisition and data storage. This method provides target recognition and environment perception systems with a unified multi-dimensional data acquisition and preservation model (the acquired and saved sample data share a unified coordinate reference frame in the time and space dimensions), and is also suitable for the acquisition and preservation of semantic samples.
Summary of the invention
The data acquired by the different types of sensors in the system are organized (aligned) on the basis of the camera's two-dimensional pixel structure. Data organization is the process of merging, storing and processing data according to a defined mode and rules. In the present invention, the data acquired by the different types of sensors are processed according to a "multi-dimensional measurement parameter matrix" method: they are uniformly aligned and merged according to the spatial sampling projection model by which the optical camera captures an image, so that the data sampled for a target by the other sensors are mapped onto the imaging pixel positions at which the same target appears after being captured and imaged by the camera, combined there with the image data, and stamped with a timestamp (recording the sampling time), for data acquisition and storage. The camera may be an ordinary visible-light camera or a thermal-imaging infrared camera; the latter extends the applicable range of the system, for example to scenes without visible illumination and to bad weather conditions (dense fog, heavy rain) in which an ordinary camera cannot capture targets normally.
We combine the detection data of different dimensions on the basis of the camera's two-dimensional image plane. We extend the information contained in each pixel: in addition to the brightness and/or color information it originally carries, we give each pixel a longitudinal dimension. Into this added longitudinal dimension we enter the information, in their various corresponding dimensions, detected by the other sensors for the target object that the pixel maps to in the camera's detection space (object space), such as relative distance, relative velocity, the target's radar cross-section (RCS) data, and the target's thermal radiation temperature distribution, as well as processing information computed by the system (such as the target's optical flow data). This multi-dimensional information is assembled, layer by layer, onto the description of the target object whose unit is the image pixel, and can be expressed mathematically as a matrix array of unified structure. Adding, in the dimension of each camera pixel, the corresponding data captured by the other sensors increases the system's perception depth and establishes a stereoscopic multi-dimensional depth-perception matrix array whose granularity is the camera pixel: each original pixel becomes a multi-dimensional quasi-"pixel". With such a data organization method we achieve better data sampling and storage.
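The longitudinal extension described above can be sketched as a three-dimensional array whose depth axis holds the per-pixel layers. The layer names, ordering and toy resolution below are illustrative assumptions, not values mandated by the patent:

```python
import numpy as np

# Minimal sketch of the multi-dimensional quasi-"pixel" array:
# 3 color layers (R, G, B) plus radar-derived layers
# (distance L, relative speed S, radar cross-section RCS).
H, W = 4, 6                        # camera resolution (toy size)
LAYERS = ["R", "G", "B", "L", "S", "RCS"]

cube = np.zeros((H, W, len(LAYERS)), dtype=np.float32)

# The camera contributes the first three layers of every pixel...
cube[..., 0:3] = 128.0             # placeholder mid-gray image

# ...and a radar detection mapped onto pixel (y=2, x=3) fills
# that pixel's longitudinal layers.
y, x = 2, 3
cube[y, x, 3] = 17.5               # relative distance (m)
cube[y, x, 4] = -1.2               # relative speed (m/s, closing)
cube[y, x, 5] = 4.0                # radar cross-section (m^2)

# The multi-dimensional quasi-"pixel" (x, y) is simply this
# longitudinal vector: all detection dimensions of one target, aligned.
print(cube[y, x])
```

The key design point this illustrates is that every added sensor dimension lands at the pixel coordinate of the same target, so no separate spatial index is needed to associate the measurements.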
The matrix array of "multi-dimensional measurement parameters" is shown schematically in the accompanying drawing.
The cross-sectional coordinates of a multi-dimensional quasi-"pixel" are identical to the pixel coordinates of the camera image plane, because a multi-dimensional quasi-"pixel" is a longitudinal extension of the pixel information of the camera image plane: each pixel gains a combination of information in multiple longitudinal dimensions. We refer to the multi-dimensional quasi-"pixel" expanded from the pixel at coordinate position (x, y) of the camera pixel plane (i.e., "pixel (x, y)") as: multi-dimensional quasi-"pixel" (x, y).
The ordering of the data matrices (layers) along the longitudinal direction of the multi-dimensional quasi-"pixel" array can be changed flexibly. For example, the grayscale "Y" data matrix (layer) can be placed at the front of the longitudinal stack, in the middle, or at the back, as long as this is predefined in the system description of the data structure. The number of layers (i.e., the combination of data types) can also be increased or reduced as needed. The principle of the array organization, however, remains: the two-dimensional pixel structure of the image captured by the camera is the basis of data organization (alignment); the other detection-dimension information captured by the other sensors for the same target mapped to pixel (x, y) is assembled, layer by layer, onto the image pixel (x, y) data captured by the camera, so that each image pixel is the combining anchor (target object description) of a data cell, which can be described mathematically as a matrix array of unified structure. This principle is the core of the method of the present invention.
The process of organizing the "multi-dimensional measurement parameter" matrix array data is as follows. Using geometric space transformations, we associate the detection domain of each sensor (the physical space region from which its data are acquired) with a standard three-dimensional Euclidean geometric space (spatial coordinates labeled as the X/Y/Z axes), establishing a one-to-one correspondence. Since the mounting position of each sensor may differ, we compute, from each sensor's mounting parameters, the central axis (X'/Y'/Z' axes) of its respective detection domain (its detection space), and then, by translating, rotating and scaling the respective coordinate systems, unify them into the camera's three-dimensional detection space (real field) coordinates, establishing the system's unified detection space (detection domain) and a common detection viewpoint. Finally, according to the mapping relationships established on the two-dimensional object plane corresponding to the camera imaging plane (the two-dimensional object plane of the object space in the camera) for the detection domains of the other sensors, such as radar and infrared thermal imager, the targets they detect are docked, through these mapping relationships, with each pixel of the camera image. The respective detection regions of the sensors thus acquire a one-to-one correspondence within the system's unified region, and the target detection data are then assigned, one by one, to the corresponding image pixel (x, y) positions in the multi-dimensional measurement parameter matrix, completing the data combination. The result of establishing correspondences among the multi-sensor detection domains is that data from the different perception domains are uniformly aligned and merged according to the spatial sampling projection model of the optical camera, and the other detection-dimension information captured for the same target by the other sensors is assembled, layer by layer, onto the image pixel data captured by the camera.
We can also associate the detection domain of each sensor (the physical space region from which its data are acquired) with a spherical coordinate system, establishing a one-to-one correspondence, using the three spherical coordinate variables (ρ, φ, θ), where ρ is the distance of the detected target from the coordinate system's origin, φ is the angle of the target relative to the z-axis, and θ is the angle of the target relative to the x-axis. In our actual system, the data mapping relationship of the multi-dimensional quasi-"pixels" is precisely the spatial mapping of targets in the radial direction relative to the spherical coordinate origin (i.e., the camera's entrance pupil position).
Euclidean solid-geometry space coordinates and spherical coordinates can be converted into each other; the relationship is as follows:
X=ρ sin φ cos θ
Y=ρ sin φ sin θ
Z=ρ cos φ
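The conversion above, together with its inverse (needed when unifying a sensor's Cartesian detection domain into the spherical frame centered on the camera entrance pupil), can be sketched directly from the formulas; the round-trip values below are just an illustrative target:

```python
import math

def spherical_to_cartesian(rho, phi, theta):
    """Apply the formulas above: phi is the angle from the z-axis,
    theta the angle from the x-axis."""
    x = rho * math.sin(phi) * math.cos(theta)
    y = rho * math.sin(phi) * math.sin(theta)
    z = rho * math.cos(phi)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """Inverse mapping, used when unifying sensor detection domains."""
    rho = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(z / rho) if rho > 0 else 0.0
    theta = math.atan2(y, x)
    return rho, phi, theta

# Round trip: a target 10 m away, 30 deg off the z-axis, 45 deg off the x-axis.
p = spherical_to_cartesian(10.0, math.radians(30), math.radians(45))
r, f, t = cartesian_to_spherical(*p)
print(round(r, 6), round(math.degrees(f), 6), round(math.degrees(t), 6))
```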
Since the combination of our multi-dimensional quasi-"pixels" means that "each pixel gains multiple longitudinal dimensions, into which is entered the information, in the corresponding dimensions, detected by the other sensors for the target object unit that the pixel maps to in the camera's detection space (object space)", the detection spaces of the various other sensors are uniformly mapped onto the camera's detection space, aligned consistently with its optical axis.
Because the spatial resolution of each sensor may differ from that of the camera at initial input, during data assembly we use a data-resolution matching method: interpolation at assembly time to solve the resolution-matching problem. Alternatively, we define macroblocks (Macroblock) in the high-resolution pixel plane, divide the picture into blocks of a predefined size, and then perform one-to-one mapping-relationship data matching with the detection data of the low-resolution sensor. Naturally, the specific definition parameters of the macroblocks must be declared in the data organization (for example, indicated in the data file header or in its description).
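Under the assumption of an integer resolution ratio between the two sensors, the macroblock matching described above amounts to letting each low-resolution sensor cell cover all pixels of its block, which is what nearest-neighbor replication produces. The grid sizes below are hypothetical:

```python
import numpy as np

# Hypothetical sizes: a 2x3 radar grid matched to a 4x6 camera pixel plane,
# i.e. each macroblock is 2x2 pixels (an assumption for illustration).
radar = np.array([[10.0, 12.0, 14.0],
                  [11.0, 13.0, 15.0]])   # e.g. target-distance readings (m)

BLOCK = 2   # macroblock edge in pixels; would be declared in the data-file header

# Nearest-neighbor upsampling: every radar cell is replicated over one macroblock.
dense = np.repeat(np.repeat(radar, BLOCK, axis=0), BLOCK, axis=1)

print(dense.shape)                # (4, 6): now matches the camera pixel plane
print(dense[0, 0], dense[3, 5])   # corner blocks carry radar[0,0] and radar[1,2]
```

For smoother value transitions, bilinear or cubic-convolution interpolation could replace `np.repeat` here without changing how the result is assembled into the matrix.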
In some scenes, the movement velocity of targets must be recorded accurately, and the system requires higher precision for the "speed dimension" in the multi-dimensional quasi-"pixel" array, stated as a vector velocity rather than a relative velocity. In this case we can use the components, along the three-dimensional X/Y/Z axes (or the corresponding spherical coordinates (ρ, φ, θ)), of the target vector velocity data derived by the system, and expand the "speed dimension" of the multi-dimensional quasi-"pixel" array to 3 longitudinal layers (each layer storing the velocity component for the corresponding X/Y/Z axis or spherical coordinate (ρ, φ, θ) value). In this case, the vector value of the target's movement velocity is recorded in our multi-dimensional quasi-"pixel" data organization structure.
Similarly, other processing information derived by the system (such as optical flow data) can be mapped into the correspondingly added longitudinal dimensions of the multi-dimensional quasi-"pixel" data organization structure, acquired and saved as further dimension data of the system's multi-dimensional sampled data.
In the present invention, the data acquired by the different types of sensors in the system are organized according to the multi-dimensional quasi-"pixels": they are uniformly aligned and merged according to the spatial sampling projection model by which the optical camera captures an image; the data sampled for a target by the other sensors are mapped onto the corresponding imaging pixel positions at which the target appears after being captured and imaged by the camera and combined there with the image data, stamped with a timestamp (recording the sampling time), for data sampling and storage. The data organization method of the invention provides a pixel-based data-structure method for describing and recording target features and environment perception. It combines the target detection data of multiple dimensions in the form of a multi-layer matrix array (resembling a stereoscopic matrix), so that the data form a many-faceted fusion; the combined information from different dimensions yields richer and more effective data-mining and feature-extraction potential. We not only provide a more efficient event recording method (format) that effectively improves the system's environment perception and target detection capabilities; the bandwidth needed to transmit the data and the space needed to store them can also be greatly saved, facilitating effective and sufficient data streaming to different systems, while reducing the data volume and real-time processing capacity required of the system (processing units) for event prediction and analysis, effectively lowering the cost of the environment perception system.
Since this method contains the target's relative distance and velocity information within the multi-dimensional quasi-"pixel" structure, with our method we can produce a description of a target's intention, and a scene analysis predicting what is about to happen, from just a single frame of data (using only the multi-dimensional quasi-"pixel" matrix data structure).
Detailed description of the invention
Fig. 1 is a schematic description of the "multi-dimensional measurement parameter" matrix array structure.
Specific implementation method
Among multi-sensor fusion combinations, the most common is the combination of a camera and a microwave radar. The camera outputs color images (RGB or YUV data); the microwave radar outputs the distance, relative velocity, azimuth and radar cross-section (RCS, Radar Cross-Section) data of the detected targets. We arrange the data acquired by the camera in three layers in RGB color order (the order may be exchanged); assume each layer's image size (i.e., the camera resolution) is X*Y (e.g., 1920*1080, corresponding to a 1080P camera). If the raw data input is in YUV format, it can also be arranged in three YUV layers, but we suggest converting it to an RGB data structure (YUV to RGB), because this reduces the correlation between the layers and benefits subsequent independent feature extraction. We take this three-layer stereoscopic data structure (size: X*Y*3) as the "camera raw data input layer"; then, following the structure of the multi-dimensional quasi-pixels, the hierarchically arranged multi-dimensional data acquired by the microwave radar are added on top of the camera raw data input layer.
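The recommended YUV-to-RGB conversion can be sketched as below. The patent does not fix a particular YUV variant, so the full-range BT.601 coefficients used here are an assumption:

```python
import numpy as np

def yuv_to_rgb(yuv):
    """Convert an H x W x 3 YUV image to RGB before layer-stacking.
    Uses full-range BT.601 coefficients (an assumption; the source
    does not specify which YUV variant the camera outputs)."""
    y = yuv[..., 0].astype(np.float32)
    u = yuv[..., 1].astype(np.float32) - 128.0   # chroma is offset by 128
    v = yuv[..., 2].astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# A mid-gray pixel (Y=128, U=V=128) stays neutral gray in RGB.
gray = np.full((2, 2, 3), 128, dtype=np.uint8)
print(yuv_to_rgb(gray)[0, 0])   # [128 128 128]
```

The resulting H x W x 3 RGB array is exactly the X*Y*3 "camera raw data input layer" onto which the radar layers are then stacked.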
We match the data of the other sensors to the pixels of the image captured by the camera by correspondence mapping. The data organization of each pixel can be stated as three separate RGB or YUV layers, or the data can be packed into a single layer, with one multi-bit data unit (24-bit, 32-bit or even higher) stating the combined RGB or YUV value of a pixel; or, for an achromatic camera (such as an infrared camera), what is captured directly is the grayscale value of the image. In any of these cases, we match the data of the other sensors to the pixels of the camera image by correspondence mapping: the other detection-dimension information captured for the same target by the other sensors is assembled, layer by layer, onto the image pixels it is captured in by the camera, with each image pixel as the combining anchor of a data cell; the data are mapped onto the pixels by correspondence.
If the data acquired by the radar are to be matched directly with the camera pixel data, the radar data may be too sparse; to match the camera image data directly, point by point, according to the pixel relationship, they must first be processed, converting the radar data into dense, image-like data with a tensor structure. In the present invention we devised the following method to enter the radar data into our system's "stereoscopic multi-dimensional depth-perception matrix": 1) Using geometric projection, the radar's detection space is referenced to the camera imaging model, establishing the radial spatial projection relationship relative to the spherical coordinate origin (i.e., the camera's entrance pupil position); all detected targets, following this camera-imaging-based projection relationship, have their data projected onto their one-to-one corresponding image pixel positions (projected onto the two-dimensional object plane corresponding to the camera imaging plane, which serves as the radar targets' two-dimensional mapping plane). The two-dimensional spatial resolution of this plane is made equal to the pixel resolution of the matching camera image in the system, establishing a point-by-point one-to-one mapping between the radar data and the camera data. The target data detected by the radar are mapped onto the radar targets' two-dimensional mapping plane, generating the "radar perception sub-matrix". In matrix depth, the data (layers) of the radar perception sub-matrix, following the "radar raw data input", combine as: an L (target distance) layer, an S (relative velocity) layer, and an R (radar cross-section) layer. As before, the order of these layers can be interchanged, and they can be combined flexibly: L, S and R all enabled, only one of them selected (L or S or R), or any two (L+S, S+R, etc.). The spatial resolution of current millimeter-wave radar is relatively low, and its angular resolution on targets is not high, so its values project onto a rather large "possible coverage area" of the targets' two-dimensional mapping plane: the radar's native pixel granularity is larger, and its resolution lower, than the pixel size of the camera image. In order to assign each data-matrix layer of every multi-dimensional quasi-"pixel" its corresponding value, we need to interpolate the "radar two-dimensional mapping plane", raising its resolution until it matches the pixels of the camera image, and then assign values to the multi-dimensional "pixels" one by one. Common interpolation methods, such as nearest-neighbor interpolation, bilinear interpolation and cubic convolution, can all be used. 2) Since the radar data are sparse, within the radar's data structure (the radar matrix data layers L, S, R, etc.), the regions where the radar has detected a target carry radar data that can be assigned one-to-one; but in the regions where no target is detected, we assign the corresponding raw radar data the value "0", or a preset default value representing the background. In this way every matrix unit of the radar data matrix has an assigned value. Another approach is to define macroblocks (Macroblock) in the high-resolution pixel plane: after dividing the picture into blocks of a predefined size, correspondence-mapping data matching is performed with the detection data of the low-resolution sensor; the macroblocks' specific definition parameters are preset in the data organization (indicated in the data file or in other related description documents).
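Step 2) above, assigning detections one-to-one and filling undetected regions with a background default, can be sketched as follows. The detection list format (pixel row, pixel column, then the three radar dimensions) is an assumption for illustration:

```python
import numpy as np

H, W = 4, 6                 # camera pixel resolution (toy size)
BACKGROUND = 0.0            # preset default representing "no target detected"

# Hypothetical sparse radar detections, already projected to pixel coordinates:
# (row, col, distance_m, rel_speed_mps, rcs_m2)
detections = [(1, 2, 20.0, -3.0, 1.5),
              (3, 5, 55.0,  0.8, 7.2)]

# One dense layer per radar dimension: L (distance), S (speed), R (RCS).
L = np.full((H, W), BACKGROUND)
S = np.full((H, W), BACKGROUND)
R = np.full((H, W), BACKGROUND)
for row, col, dist, speed, rcs in detections:
    L[row, col], S[row, col], R[row, col] = dist, speed, rcs

# Stack into the radar perception sub-matrix: H x W x 3, every cell assigned.
radar_submatrix = np.stack([L, S, R], axis=-1)
print(radar_submatrix[1, 2])   # the detection at pixel (row 1, col 2)
print(radar_submatrix[0, 0])   # a background cell
```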
We can also combine the intermediate data of sensor processing into the multi-dimensional quasi-"pixels": for example, the camera's optical flow data. Optical flow expresses the change of the image; since it contains information about target motion, an observer can use it to determine how a target is moving. Optical flow is a parameter derived from the pixel relationships between successive camera image frames, and can be a two-dimensional vector (X and Y directions). On the same principle, we can add each current pixel's optical flow data, relative to the preceding and following frames, into the "multi-dimensional measurement parameter" data matrix, correspondingly adding an "optical flow sub-matrix"; this gives the system's data organization more data dimensions for subsequent data-fusion processing.
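A deliberately minimal stand-in for a real per-pixel optical-flow algorithm, estimating one global shift between two frames by exhaustive matching, is enough to show how the resulting two-component flow becomes an extra pair of longitudinal layers:

```python
import numpy as np

def global_shift_flow(prev, curr, max_shift=2):
    """Estimate a single global (dx, dy) motion vector by exhaustive
    shift matching over the frame interiors. A toy substitute for a
    real dense optical-flow method, used only to produce a flow value."""
    best, best_err = (0, 0), np.inf
    h, w = prev.shape
    m = max_shift
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            err = np.abs(prev[m:h-m, m:w-m] -
                         curr[m+dy:h-m+dy, m+dx:w-m+dx]).sum()
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

prev = np.zeros((8, 8)); prev[3, 3] = 1.0   # a bright dot
curr = np.zeros((8, 8)); curr[3, 4] = 1.0   # the dot moved 1 pixel right
dx, dy = global_shift_flow(prev, curr)
print(dx, dy)   # 1 0

# The flow becomes two extra longitudinal layers (X and Y components),
# i.e. the "optical flow sub-matrix" added to the quasi-"pixel" stack.
flow_submatrix = np.stack([np.full((8, 8), dx), np.full((8, 8), dy)], axis=-1)
print(flow_submatrix.shape)   # (8, 8, 2)
```

A production system would replace `global_shift_flow` with a dense per-pixel method; the layer-stacking step would be unchanged.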
Similarly, if the system needs an accurate record of target movement velocity, sampling and saving vector velocity data rather than relative velocity, we first derive (time-aligned and recorded) the components of the target vector velocity at the collection (shooting) moment along the three-dimensional X/Y/Z axes (or the corresponding spherical coordinates (ρ, φ, θ)), then expand the "speed dimension" of the multi-dimensional array matrix to 3 longitudinal layers (each layer storing the target velocity's corresponding component for the X/Y/Z axes or spherical coordinates (ρ, φ, θ)), and then, by the mapping and sampling principle of the multi-dimensional quasi-"pixels", add the 3 layers of vector velocity data onto each pixel unit. In this way, the vector value of the target's movement velocity is recorded in our multi-dimensional quasi-"pixel" array.
If other sensors, such as an infrared thermal imager or a laser radar, are added to the system, we combine the target data they capture by the same method used for the radar above: the detection information captured for the same target by the other sensors is assembled, layer by layer, onto the image pixels captured by the camera, with each image pixel as the combining anchor of a data cell, producing the multi-dimensional quasi-"pixel" data matrix. Likewise, for data sets of different spatial resolution we can match using data interpolation or by defining macroblocks, finally completing the organized mapping of the data.
Among multi-sensor fusion combinations there is another efficient one: a camera plus a microwave radar plus an infrared camera (thermal imager). Adding the infrared camera's data adds the dimension of target heat radiation, expanding and enhancing the system's perception capability. If the pixel count (image resolution) of the image captured by the infrared camera does not match that of the ordinary camera, we first interpolate the pixels of the infrared image to match the ordinary camera pixels that serve as the multi-dimensional quasi-"pixel" reference, and then, by coordinate transformation, align and merge them under the unified spatial sampling projection model of the image captured by the ordinary camera: the data sampled for a target by the infrared camera are mapped onto the corresponding imaging pixel positions at which that target is captured and imaged by the camera, forming the multi-dimensional data structure.
If a laser radar is added to our system: since laser radar usually outputs a point cloud data structure, either grayscale pixel point cloud data (X1, Y1, Z1, gray value 1) or color point cloud data (X1, Y1, Z1, r1, g1, b1), we likewise combine these 4-layer or 6-layer data onto the target point: according to the spatial sampling projection model by which the optical camera captures an image, they are mapped onto the camera pixel position (x, y), combining into the multi-dimensional quasi-"pixel" (x, y).
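Mapping LiDAR points onto camera pixels under the camera's sampling projection can be sketched with a pinhole model. The intrinsics (focal lengths, principal point) and point values below are illustrative assumptions, and the points are presumed already transformed into the camera's X/Y/Z frame as described earlier:

```python
import numpy as np

FX, FY, CX, CY = 100.0, 100.0, 32.0, 24.0   # assumed focal lengths / principal point
H, W = 48, 64                               # camera resolution

def project_points(points):
    """points: N x 4 array (X, Y, Z, gray). Returns an H x W x 2 stack with
    a depth layer and a LiDAR-grayscale layer; pixels with no point stay 0."""
    stack = np.zeros((H, W, 2))
    for X, Y, Z, gray in points:
        if Z <= 0:
            continue                        # point behind the camera: skip
        u = int(round(FX * X / Z + CX))     # pixel column
        v = int(round(FY * Y / Z + CY))     # pixel row
        if 0 <= u < W and 0 <= v < H:
            stack[v, u, 0] = Z              # depth layer
            stack[v, u, 1] = gray           # LiDAR grayscale layer
    return stack

pts = np.array([[0.0, 0.0, 5.0, 200.0],     # on the optical axis -> (CX, CY)
                [1.0, 0.5, 10.0, 90.0]])
s = project_points(pts)
print(s[24, 32])   # [  5. 200.]
print(s[29, 42])   # [ 10.  90.]
```

For color point clouds the same projection applies, with three color layers stacked in place of the single grayscale layer.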
In the present invention, a multi-dimensional class-"pixel" matrix structure, together with a timestamp (the sampling time of the record), is used to describe and record target features and environmental information. This data organization method lets a multi-sensor system acquire and save rich, unified information from multiple detection dimensions. The target data acquired in multiple dimensions are combined in a unified data organization structure, producing a multi-dimensional matrix array in which each multi-dimensional class "pixel" organically combines the detection information perceived in every dimension, uniformly aligned in one unit. Such a structure brings great convenience to subsequent data processing, to machine learning (whether traditional feature-plus-classifier methods, neural-network methods, or a combination of the two), and to the sampling of training data.

It should be understood that machine learning requires massive sample sets, and these sample sets must be acquired by the specific multi-sensor combination system itself. The data organization method of the invention can acquire rich, well-targeted, effective multi-dimensional data samples for a specific multi-sensor combination, is well suited to collecting sample sets for multi-sensor fusion systems (positive/negative sample sets and semantic sample sets), and substantially saves storage space. For example, if a conventional video sample had to carry velocity information, the system would need to acquire a whole video clip, which complicates acquisition, storage, transmission, and processing, whereas a single frame organized by the method of the invention can record and save the same information. When a machine-learning system involves "cloud" + "terminal" architectures or edge computing, sampled data must be transmitted between the local system and the cloud; the data sampling and storage method of the invention enables more efficient transmission of multi-dimensional information while preventing unnecessary redundant data from occupying transmission bandwidth. In addition, for applications in certain fields (such as security, surveillance, and insurance evidence collection) the system is required to save as much information as possible in as little storage space as possible; with the data sampling and storage method of the invention, the multi-dimensional information of a target (including target range and velocity vectors) can be stored in a single frame data matrix, substantially improving the efficiency of preserving forensic information.
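A minimal sketch of the per-frame organization described above (the channel names and the dataclass are illustrative assumptions, not the patent's normative format): each frame is an H×W×C tensor whose channels stack the camera image with the aligned sensor layers, plus a single sampling timestamp for the whole frame.

```python
import time
from dataclasses import dataclass, field

import numpy as np

# Illustrative channel layout: camera R, G, B plus aligned lidar range,
# radial velocity, and radar cross-section layers.
CHANNELS = ["r", "g", "b", "range", "velocity", "rcs"]

@dataclass
class MultiDimFrame:
    """One frame of the multi-dimensional class-'pixel' matrix array."""
    height: int
    width: int
    timestamp: float = field(default_factory=time.time)  # sampling time

    def __post_init__(self):
        # H x W x C array; every (y, x) cell is one multi-dimensional "pixel".
        self.data = np.zeros((self.height, self.width, len(CHANNELS)),
                             dtype=np.float32)

    def set_pixel(self, y, x, **values):
        """Write named channel values into the class-'pixel' at (y, x)."""
        for name, v in values.items():
            self.data[y, x, CHANNELS.index(name)] = v

frame = MultiDimFrame(480, 640)
frame.set_pixel(290, 420, range=5.0, velocity=-1.2)
print(frame.timestamp, frame.data[290, 420])  # all layers of this "pixel"
```

A single such frame carries range and velocity alongside the image, which is the storage saving argued above: no video clip is needed to record motion information.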

Claims (6)

1. A method for multi-dimensional information data acquisition and storage of an environment and targets, in which a deeply integrated multi-sensor system that includes a camera performs data acquisition and storage, characterized in that: the data acquired on a target by the multiple sensors in their respective perception dimensions are uniformly aligned and merged according to the spatial sampling projection model of the image captured by the optical camera, and multi-faceted data fusion is performed by taking the pixels of the camera-captured image as the basis and superimposing on them the various data sampled by the other sensors, for multi-dimensional information data acquisition and storage of the environment and targets.
2. the method for multidimensional information Data acquisition and storage according to claim 1, we are different types of in system The data of sensor acquisition do unified alignment and merger according to the spatial sampling projection model that optical camera captures image, Be characterized in: target by the data of other sensor samples be mapped to same target by camera capture imaging after it is corresponding at As pixel position on, target by the Various types of data of other sensor samples be combined to it by camera capture image pixel Data together, are used for multidimensional information Data acquisition and storage.
3. the method for multidimensional information Data acquisition and storage according to claim 1, we can also be added sampling when Between information, it is characterized in that: system by multidimensional information data combine after, along with timestamp record sampling time, be used for environment With the multidimensional information Data acquisition and storage of target.
4. the method for multidimensional information Data acquisition and storage according to claim 1, each sensor when due to initial input Spatial resolution and camera may be different, when data assembly we can using the matched method of data resolution come It solves the problems, such as, it is characterized in that: we are right at the intensive class image data for having tensor structure the data conversion of low resolution The matching problem of itself and camera image resolution ratio is solved by the way of interpolation afterwards.
5. the method for multidimensional information Data acquisition and storage according to claim 1, each sensor when due to initial input Spatial resolution and camera may be different, we can also be using another data resolution when data assembly The method matched, both the method for macro block solved the problems, such as, it is characterized in that: we, which are used in high-resolution pixel planes, defines macro block (Macroblock) mode is passed with low resolution again by the way that picture to be divided into after the block that size pre-defines one by one Sensor detection data carry out correspond mapping relations Data Matching combination, macro block be specifically defined parameter we in data Illustrate in tissue, for example is indicated in data file head or explanation.
6. the method for multidimensional information Data acquisition and storage according to claim 1, we can be led sensor processing Intermediate processing data out is combined to inside our multi-dimensional data institutional frameworks based on pixel and goes, it is characterized in that: handle The data such as the vector velocity (3 n dimensional vector n speed) that process asks derived data such as light stream or system to extrapolate are mapped to it Corresponding target is captured by camera on the position of the corresponding imaging pixel after imaging, and composition multidimensional data structure is used for multidimensional Information data acquisition and storage.
CN201811335547.1A 2018-11-09 2018-11-09 A kind of method of environment and the multidimensional information Data acquisition and storage of target Pending CN109522951A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811335547.1A CN109522951A (en) 2018-11-09 2018-11-09 A kind of method of environment and the multidimensional information Data acquisition and storage of target

Publications (1)

Publication Number Publication Date
CN109522951A true CN109522951A (en) 2019-03-26

Family

ID=65776370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811335547.1A Pending CN109522951A (en) 2018-11-09 2018-11-09 A kind of method of environment and the multidimensional information Data acquisition and storage of target

Country Status (1)

Country Link
CN (1) CN109522951A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110765A (en) * 2019-04-23 2019-08-09 四川九洲电器集团有限责任公司 A kind of multisource data fusion target identification method based on deep learning
CN110188108A (en) * 2019-06-10 2019-08-30 北京平凯星辰科技发展有限公司 Date storage method, device, system, computer equipment and storage medium
CN111402428A (en) * 2020-03-23 2020-07-10 青岛大学 Underground pipeline exploration method based on ARGIS
CN112464853A (en) * 2020-12-09 2021-03-09 辽宁省视讯技术研究有限公司 Scene analysis system based on multidata input
CN113286311A (en) * 2021-04-29 2021-08-20 沈阳工业大学 Distributed perimeter security protection environment sensing system based on multi-sensor fusion
CN113343962A (en) * 2021-08-09 2021-09-03 山东华力机电有限公司 Visual perception-based multi-AGV trolley working area maximization implementation method
EP3779867A4 (en) * 2018-03-29 2021-12-29 Shanghai Zhenfu Intelligent Tech Co Ltd. Data processing method and device based on multi-sensor fusion, and multi-sensor fusion method
CN113925475A (en) * 2021-10-16 2022-01-14 谢俊 Non-contact human health monitoring device and method
CN116229375A (en) * 2023-05-06 2023-06-06 山东卫肤药业有限公司 Internal environment imaging method based on non-light source incubator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060007308A1 (en) * 2004-07-12 2006-01-12 Ide Curtis E Environmentally aware, intelligent surveillance device
CN104506869A (en) * 2015-01-12 2015-04-08 深圳市江机实业有限公司 Method for motion estimation of video sequences based on block matching under different resolutions
CN106101590A (en) * 2016-06-23 2016-11-09 上海无线电设备研究所 The detection of radar video complex data and processing system and detection and processing method
CN108663677A (en) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 A kind of method that multisensor depth integration improves target detection capabilities


Similar Documents

Publication Publication Date Title
CN109522951A (en) A kind of method of environment and the multidimensional information Data acquisition and storage of target
CN109655825A (en) Data processing method, device and the multiple sensor integrated method of Multi-sensor Fusion
CN108053449A (en) Three-dimensional rebuilding method, device and the binocular vision system of binocular vision system
CA3100569A1 (en) Ship identity recognition method base on fusion of ais data and video data
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
Chen et al. Indoor camera pose estimation via style‐transfer 3D models
CN104134208B (en) Using geometry feature from slightly to the infrared and visible light image registration method of essence
CN113920097B (en) Power equipment state detection method and system based on multi-source image
CN109544501A (en) A kind of transmission facility defect inspection method based on unmanned plane multi-source image characteristic matching
CN113192182A (en) Multi-sensor-based live-action reconstruction method and system
CN114694011A (en) Fog penetrating target detection method and device based on multi-sensor fusion
CN111914615A (en) Fire-fighting area passability analysis system based on stereoscopic vision
Garg et al. Look no deeper: Recognizing places from opposing viewpoints under varying scene appearance using single-view depth estimation
CN110288623A (en) The data compression method of unmanned plane marine cage culture inspection image
CN116994135A (en) Ship target detection method based on vision and radar fusion
Zhang et al. As-built bim updating based on image processing and artificial intelligence
CN103489165A (en) Decimal lookup table generation method for video stitching
Morelli et al. Deep-image-matching: a toolbox for multiview image matching of complex scenarios
CN110345919A (en) Space junk detection method based on three-dimensional space vector and two-dimensional plane coordinate
CN115471782A (en) Unmanned ship-oriented infrared ship target detection method and device
Ikehata et al. Panoramic structure from motion via geometric relationship detection
Roshandel et al. Semantic segmentation of coastal zone on airborne LiDAR bathymetry point clouds
CN110930507A (en) Large-scene cross-border target tracking method and system based on three-dimensional geographic information
CN117726687B (en) Visual repositioning method integrating live-action three-dimension and video
Zhang et al. Intelligence information analysis based on computer vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201108 floor 3 and 4, No. 598, Guanghua Road, Minhang District, Shanghai

Applicant after: SHANGHAI ZTTVISION TECHNOLOGIES Co.,Ltd.

Address before: Room 3001, creative building, 1559 Zuchongzhi Road, Zhangjiang High Tech Park, Pudong New Area, Shanghai 201203

Applicant before: SHANGHAI ZTTVISION TECHNOLOGIES Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20211112

Address after: 201210 floor 3, building 1, No. 400, Fangchun Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Shanghai Zhenfu Intelligent Technology Co.,Ltd.

Address before: 201108 floors 3 and 4, No. 598, Guanghua Road, Minhang District, Shanghai

Applicant before: SHANGHAI ZTTVISION TECHNOLOGIES Co.,Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190326
