CN116205795A - Environment sensing method and environment sensing device for rail transit - Google Patents

Environment sensing method and environment sensing device for rail transit

Info

Publication number
CN116205795A
Authority
CN
China
Prior art keywords
image
image information
information
fused
continuous frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111443634.0A
Other languages
Chinese (zh)
Inventor
Li Ning (李宁)
Liu Weihua (刘伟华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd filed Critical BYD Co Ltd
Priority to CN202111443634.0A priority Critical patent/CN116205795A/en
Publication of CN116205795A publication Critical patent/CN116205795A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an environment sensing method and an environment sensing device for rail transit. The method comprises the following steps: acquiring continuous frame image information F_n; performing Gaussian filtering processing on the continuous frame image information F_n to obtain primarily processed image information; performing a fusion operation on the primarily processed image information to obtain fused image information; performing a reprocessing operation on the fused image information to obtain reprocessed image information; and determining environment perception information of the rail transit based on the reprocessed image information. By applying data optimization and detail enhancement to the continuous frame images acquired during rail transit operation, the amount of data analysis and processing in the environment sensing process is effectively reduced, environment sensing efficiency is improved, and the accuracy of environment sensing is effectively increased.

Description

Environment sensing method and environment sensing device for rail transit
Technical Field
The invention relates to the technical field of information detection, and in particular to an environment sensing method for rail transit and an environment sensing device for rail transit.
Background
Environmental perception of rail transit lines is an important technical means for maintaining the running safety of rail vehicles. Existing perception methods for rail transit lines fall into two kinds: one perceives the line through trackside devices, such as infrared laser scanners and trackside cameras; the other perceives the line through vehicle-mounted equipment, such as lidar and cameras.
The trackside-camera approach can be widely applied in the rail transit field because it directly upgrades the existing line monitoring systems. In existing trackside-camera methods, the perception information for the rail transit line is generated by processing and analyzing every frame of the captured video.
In practice, however, rail vehicles run at high speed, and as the technology develops their speed keeps increasing, which forces the video capture frequency ever higher. If every frame is processed, the large amount of redundant information greatly increases the computational load, wastes computing power, and sharply reduces processing efficiency; if the capture frequency is lowered instead, the high-speed motion causes key information to be lost, leading to false or missed detections that cannot meet practical requirements.
Disclosure of Invention
In order to overcome the above technical problems in the prior art, embodiments of the present invention provide an environment sensing method for rail transit that performs data optimization and detail enhancement on the continuous frame images acquired during rail transit operation, thereby effectively reducing the amount of data analysis and processing in the environment sensing process, improving environment sensing efficiency, and effectively improving the accuracy of environment sensing.
In order to achieve the above object, an embodiment of the present invention provides an environment sensing method for rail transit, the method including: acquiring continuous frame image information; performing Gaussian filtering processing on the continuous frame image information to obtain primarily processed image information; performing fusion operation on the primarily processed image information to obtain fused image information; performing reprocessing operation on the fused image information to obtain reprocessed image information; and determining environment perception information of the track traffic based on the reprocessed image information.
Preferably, the method further comprises: after the continuous frame image information is acquired, acquiring a preset sampling rule; randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images; and taking the sampled image as new continuous frame image information.
Preferably, the performing Gaussian filtering processing on the continuous frame image information to obtain the primarily processed image information includes: determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, the Gaussian parameter information P_t characterized by P_t = {k, σ | k = 2t - 1, σ = t - 0.5}, where F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is the current frame, t is the ordinal of each image, and n is the number of images in F_n; determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, the Gaussian distribution matrix G(x, y) characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

and generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, the primarily processed image information G_n characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \},

where ⊗ denotes two-dimensional convolution.
Preferably, the performing a fusion operation on the primarily processed image information to obtain fused image information includes: determining a weighting coefficient w_t of each image in the primarily processed image information [the weighting-coefficient formula appears only as an image in the original and is not reproduced here]; performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, the weighted-fused image d_t characterized as d_t = Σ g_t · w_t; and generating fused image information based on the weighted-fused image d_t.
Preferably, the performing a reprocessing operation on the fused image information to obtain reprocessed image information includes: acquiring standard deviation information M_s of each image in the fused image information; performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm; processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, the reprocessed image d'_t characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm); performing median filtering on the reprocessed image d'_t to obtain a filtered image; and generating reprocessed image information based on the filtered image.
Preferably, the acquiring standard deviation information M_s of each image in the fused image information includes: performing a color transformation operation on each image in the fused images to obtain corresponding transformed images; acquiring brightness data of the transformed images; and determining the standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
Correspondingly, the embodiment of the invention also provides an environment sensing device for rail transit, which comprises: an image acquisition unit configured to acquire continuous frame image information; a first processing unit for performing gaussian filtering processing on the continuous frame image information to obtain primarily processed image information; the image fusion unit is used for performing fusion operation on the primarily processed image information to obtain fused image information; the second processing unit is used for executing reprocessing operation on the fused image information to obtain reprocessed image information; and the environment sensing unit is used for determining environment sensing information of the track traffic based on the reprocessed image information.
Preferably, the apparatus further comprises a sampling unit for: after the continuous frame image information is acquired, acquiring a preset sampling rule; randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images; and taking the sampled image as new continuous frame image information.
Preferably, the first processing unit includes: a parameter information determining module for determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, the Gaussian parameter information P_t characterized by P_t = {k, σ | k = 2t - 1, σ = t - 0.5}, where F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is the current frame, t is the ordinal of each image, and n is the number of images in F_n; a Gaussian information determining module for determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, the Gaussian distribution matrix G(x, y) characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

and a Gaussian processing module for generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, the primarily processed image information G_n characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \}.
Preferably, the image fusion unit includes: a weighting coefficient determining module for determining a weighting coefficient w_t of each image in the primarily processed image information [the weighting-coefficient formula appears only as an image in the original and is not reproduced here]; a weighted fusion module for performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, the weighted-fused image d_t characterized as d_t = Σ g_t · w_t; and a fused image determining module for generating fused image information based on the weighted-fused image d_t.
Preferably, the second processing unit includes: a standard deviation acquisition module for acquiring standard deviation information M_s of each image in the fused image information; a normalization module for performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm; a weighted fusion module for processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, the reprocessed image d'_t characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm); a median filtering module for performing median filtering on the reprocessed image d'_t to obtain a filtered image; and a reprocessed image determining module for generating reprocessed image information based on the filtered image.
Preferably, the standard deviation acquisition module is configured to: perform a color transformation operation on each image in the fused images to obtain corresponding transformed images; acquire brightness data of the transformed images; and determine the standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
In another aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the method provided by the embodiments of the present invention.
Through the technical scheme provided by the invention, the invention has at least the following technical effects:
the continuous frame image information in the track transportation process is acquired by adopting the image acquisition device, and is subjected to local neglect processing by Gaussian filtering operation, so that the data volume required to be processed in the environment sensing process is effectively reduced, the data processing efficiency is improved, and the instantaneity and the effectiveness of environment sensing are improved;
secondly, the processed images are further subjected to image weighted fusion, so that the information of the current frame is not excessively lost while the local content is ignored, the motion information in the images of the continuous frames is stored in a fuzzy mode, and the time dimension information of the continuous frames is compressed into the space information of one frame of image;
thirdly, the enhancement effect on static details is achieved by reprocessing the fused image, the pixel dynamic change of the object moving at high speed in the continuous frame image can be effectively reserved, meanwhile, details of the static object without change are reserved, global motion blur and pixel jitter are reduced, and the accuracy of environment perception is effectively improved.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention. In the drawings:
fig. 1 is a flowchart of a specific implementation of an environment awareness method of rail transit provided by an embodiment of the present invention;
fig. 2 is a schematic diagram of performing gaussian filtering processing on an image in the environmental awareness method of rail transit provided by the embodiment of the present invention;
fig. 3 is a schematic diagram of standard deviation information of a calculated image in an environment sensing method of rail transit according to an embodiment of the present invention;
fig. 4 is a schematic diagram of reprocessing a fused image in the environmental awareness method of rail transit provided by the embodiment of the present invention;
fig. 5 is a schematic structural diagram of an environment sensing device for rail transit provided by an embodiment of the present invention.
Detailed Description
The following describes the detailed implementation of the embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
The terms "system" and "network" in embodiments of the invention may be used interchangeably. "plurality" means two or more, and "plurality" may also be understood as "at least two" in this embodiment of the present invention. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a exists alone, A and B exist together, and B exists alone. The character "/", unless otherwise specified, generally indicates that the associated object is an "or" relationship. In addition, it should be understood that in the description of embodiments of the present invention, the words "first," "second," and the like are used merely for distinguishing between the descriptions and not be construed as indicating or implying a relative importance or order.
Referring to fig. 1, an embodiment of the present invention provides an environment sensing method for rail transit, which includes:
s10) acquiring continuous frame image information;
s20) performing Gaussian filtering processing on the continuous frame image information to obtain primarily processed image information;
s30) performing fusion operation on the primarily processed image information to obtain fused image information;
s40) performing reprocessing operation on the fused image information to obtain reprocessed image information;
s50) determining context awareness information of the rail transit based on the reprocessed image information.
In one possible embodiment, the continuous frame image information is first acquired, for example by an image acquisition device arranged beside the track. After a rail vehicle moves into the vicinity of the image acquisition device, acquisition of continuous video information begins; the video information recorded by the image acquisition device consists of continuous multi-frame images, and continuous frame image information F_n is generated by collecting the continuous frames within a time period T. For example, continuous frame image information F_n containing n frame images is characterized as F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, where t is the current frame ordinal and f_t is the current frame. To reduce the amount of data to be processed, Gaussian filtering processing is performed on the continuous frame image information F_n to obtain primarily processed image information; a fusion operation is then performed on the primarily processed image information to obtain fused image information; a reprocessing operation is further performed on the fused image information, for example to strengthen the static details of each image, to obtain reprocessed image information; and the environment of the rail vehicle is then analyzed based on the reprocessed image information to obtain accurate environment sensing information, as sketched below.
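As a concrete illustration of the acquisition step, the following minimal Python sketch maintains the sliding window F_n of the most recent frames. The class name, the window length n = 8, and the use of collections.deque are assumptions of this sketch, not details given in the patent.

```python
from collections import deque

class FrameBuffer:
    """Sliding window holding the most recent n + 1 frames,
    F_n = {f_t, f_{t-1}, ..., f_{t-n}}  (n = 8 is an assumed value)."""

    def __init__(self, n=8):
        self.buf = deque(maxlen=n + 1)

    def push(self, frame):
        # Newest frame goes to the right; the oldest is evicted automatically.
        self.buf.append(frame)

    def ready(self):
        return len(self.buf) == self.buf.maxlen

    def frames(self):
        return list(self.buf)  # ordered oldest -> newest
```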
In the embodiment of the invention, the acquired continuous frame image information is processed by a Gaussian filtering operation during environment sensing for rail transit, so that local details in the continuous frames are greatly reduced; this greatly reduces the amount of computation and improves operational efficiency. Meanwhile, by executing a static-detail-enhancing reprocessing operation on the fused image information, static details are further optimized, extracted, and analyzed, which improves the accuracy of environment sensing for rail transit and the utilization of computing resources.
In order to further improve the efficiency of the fusion processing of high-frame-rate video and reduce the computational load of the fusion processing, in the embodiment of the present invention the method further includes: after the continuous frame image information is acquired, acquiring a preset sampling rule; randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images; and taking the sampled images as new continuous frame image information.
In a possible implementation, after the continuous frame image information is acquired, a preset sampling rule is further acquired. For example, the preset sampling rule may sample at a preset interval: according to the rule, frame images are sequentially extracted from the continuous frame image information at that interval to serve as the new continuous frames, yielding a sparser continuous frame sequence, and the sampled images are taken as the new continuous frame image information, as in the sketch below.
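A minimal sketch of such an interval-sampling rule, assuming a fixed interval step = 3 (the patent only requires that some preset rule be configured):

```python
def subsample_frames(frames, step=3):
    """Keep every `step`-th frame of a dense sequence to obtain a sparser
    one. The interval is an assumption; the patent leaves the rule open."""
    return frames[::step]
```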
In the embodiment of the invention, the number of the images in the continuous frame image information can be further reduced by sampling the original acquired continuous frame image information, so that the data processing amount is further reduced and the processing efficiency is improved on the basis of further improving the fusion processing efficiency of the high-frame-rate video.
In an embodiment of the present invention, the performing Gaussian filtering processing on the continuous frame image information to obtain the primarily processed image information includes: determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, characterized by P_t = {k, σ | k = 2t - 1, σ = t - 0.5}, where F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is the current frame, t is the ordinal of each image, and n is the number of images in F_n; determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

and generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \}.
In one possible embodiment, Gaussian filtering processing is performed on each image in the acquired continuous frame image information F_n. First, the Gaussian parameter information P_t of each image is calculated from its ordinal t; for example, P_t holds the parameters of the two-dimensional Gaussian distribution matrix of that image, P_t = {k, σ | k = 2t - 1, σ = t - 0.5}. Based on the Gaussian parameter information P_t, the Gaussian distribution matrix (Gaussian kernel) G(x, y) is then determined, for example

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right).

Each image is then convolved with its corresponding Gaussian distribution matrix to obtain the primarily processed image information G_n corresponding to the continuous frame image information F_n, for example

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \},

where ⊗ denotes two-dimensional convolution. The processed image information G_n is a frame sequence with a temporal forgetting property: a frame farther from the current moment t is convolved with a larger Gaussian distribution matrix, yielding a "forgotten" image whose local details are more blurred. Referring to fig. 2, it can be seen that after Gaussian filtering the local details in the image become more blurred, which effectively reduces the amount of data to be processed and improves operational efficiency. The sketch below illustrates this step.
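The following Python/OpenCV sketch illustrates this time-indexed ("forgetting") Gaussian filtering. The frame ordering and the choice of t = 1 for the current frame are assumptions of the sketch; the patent fixes the parameter rule P_t but not the indexing convention.

```python
import cv2
import numpy as np

def forgetting_blur(frames):
    """Blur each frame with a Gaussian kernel that grows with its distance
    from the current (last) frame, per P_t = {k, sigma | k = 2t - 1,
    sigma = t - 0.5}.

    frames: list of H x W x 3 uint8 images, ordered oldest -> newest.
    Assumption: t = 1 for the newest frame (k = 1, effectively no blur),
    t = n for the oldest frame (largest kernel).
    """
    n = len(frames)
    blurred = []
    for i, f in enumerate(frames):
        t = n - i                      # distance-from-current ordinal
        k = 2 * t - 1                  # odd kernel size, per P_t
        sigma = t - 0.5                # per P_t
        blurred.append(cv2.GaussianBlur(f, (k, k), sigma))
    return blurred
```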
In practical applications, if each image in the processed image information G_n were analyzed and processed separately, the temporal correlation of the spatial features across different images could not be represented. Therefore, to express a stronger temporal correlation when fusing the continuous frame images, a further weighted image fusion operation is adopted.
In the embodiment of the present invention, the performing a fusion operation on the primarily processed image information to obtain fused image information includes: determining a weighting coefficient w_t of each image in the primarily processed image information [the weighting-coefficient formula appears only as an image in the original and is not reproduced here]; performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, characterized as d_t = Σ g_t · w_t; and generating fused image information based on the weighted-fused image d_t.
In one possible embodiment, the weighting coefficient w_t of each image in the primarily processed image information is first obtained, and each image is then weighted accordingly to obtain the weighted-fused image d_t, for example d_t = Σ g_t · w_t. All weighted-fused images d_t are then combined to generate the corresponding fused image information; for example, the fused image information is the set of all weighted-fused images d_t.
In the embodiment of the invention, the images are further fused by weighted fusion, so that each fused image contains the change information of each pixel in the preceding frames relative to the current frame; that is, the dynamic pixel changes within the finite sequence are compressed into a single frame for expression. This greatly strengthens the temporal correlation of the spatial features across images and improves perception accuracy in the subsequent environment sensing process; a sketch of this step follows.
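A sketch of the weighted fusion step, reusing the imports from the sketch above. Because the weight formula survives only as an image in this copy of the patent, normalized weights that increase linearly toward the current frame are assumed purely for illustration.

```python
def weighted_fuse(blurred):
    """Compress the blurred sequence into one frame, d_t = sum(g_t * w_t).

    Weight assumption: w_t rises linearly toward the newest frame and all
    weights sum to 1 (the patent's actual formula is not legible here).
    """
    n = len(blurred)
    w = np.arange(1, n + 1, dtype=np.float64)
    w /= w.sum()                                    # normalize to sum 1
    stack = np.stack([b.astype(np.float64) for b in blurred])  # (n, H, W, 3)
    return np.tensordot(w, stack, axes=1)           # weighted sum -> (H, W, 3)
```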
In the practical application process, although the image in the fused image information contains pixel dynamic change information to a certain extent, the information is global information, so that dynamic blurring of the image is easy to occur, and a certain trouble is caused for subsequent environment perception analysis and recognition.
In order to solve the above technical problem, in an embodiment of the present invention, the performing a reprocessing operation on the fused image information to obtain reprocessed image information includes: acquiring standard deviation information M_s of each image in the fused image information; performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm; processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm); performing median filtering on the reprocessed image d'_t to obtain a filtered image; and generating reprocessed image information based on the filtered image.
Further, in an embodiment of the present invention, the acquiring standard deviation information M_s of each image in the fused image information includes: performing a color transformation operation on each image in the fused images to obtain corresponding transformed images; acquiring brightness data of the transformed images; and determining the standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
In a possible implementation, before environment sensing is performed, the standard deviation information M_s of each image in the fused image information is further acquired. Specifically, a color transformation operation is performed on each image in the fused images to obtain the corresponding transformed image; for example, the RGB colors of each image are transformed into HSV space, a model comprising the color parameters hue (H), saturation (S), and brightness/value (V). The brightness data is then extracted, and the standard deviation information M_s of each image is determined from the brightness data and the time sequence T of each image; for example, the brightness data serves as the input for computing a standard-deviation mask, and the standard deviation of the pixel value at each position is computed along the time direction T, yielding the standard deviation information M_s of each image. Fig. 3 is a schematic diagram of computing the standard deviation information M_s of each image according to an embodiment of the present invention.
In the embodiment of the present invention, computing the standard deviation of each image determines, for each pixel position, the degree of brightness variation along the time direction in the fused image information. For example, in fig. 3 it can be seen that after the standard deviation processing, static objects show little gray-scale response: roads, sky, buildings and the like occupy large areas of uniform gray value in the image, and their inter-frame gray changes are tiny relative to moving objects such as vehicles. A sketch of the mask computation follows.
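A minimal sketch of the standard-deviation mask described above, continuing the sketches above and assuming BGR input frames (OpenCV's default channel order):

```python
def std_mask(frames):
    """Per-pixel standard deviation of HSV brightness (the V channel)
    along the time axis: static regions score low, moving objects high."""
    v_stack = np.stack([
        cv2.cvtColor(f, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float64)
        for f in frames
    ])                          # shape (n, H, W)
    return v_stack.std(axis=0)  # shape (H, W)
```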
In order to further reduce overall pixel jitter and blurring effects, the image is further processed after the standard deviation is calculated. For example, a normalization operation is performed on the standard deviation information M_s to obtain normalized information M_norm, and the weighted-fused image d_t is then processed according to a preset rule to obtain the reprocessed image d'_t, for example d'_t = d_t · M_norm + f_t · (1 - M_norm). In the embodiment of the invention, through this calculation rule the weighted-fused image d_t is fused with the original current frame image f_t, so that details of the current frame image f_t cover the regions of the weighted-fused image d_t with little dynamic change. This effectively reduces overall pixel jitter and blurring and achieves the enhancement of static details. Fig. 4 is a schematic diagram of the reprocessed image d'_t obtained after reprocessing the fused image according to an embodiment of the present invention.
Median filtering is then further performed on the reprocessed image d'_t to remove granular noise, further improving the data accuracy and reliability of each image and yielding the filtered images. Reprocessed image information, i.e. the set of filtered images, is generated from the filtered images, and the environment sensing operation for rail transit is performed based on the reprocessed image information, so that the environment sensing information of the rail transit can be determined quickly and accurately. The sketch below covers this blending and filtering step.
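The blending and median-filtering step can be sketched as follows, again reusing the imports from the sketches above; the min-max normalization and the 3 x 3 median kernel are assumptions, since the patent specifies neither.

```python
def reprocess(d_t, f_t, m_s, ksize=3):
    """d'_t = d_t * M_norm + f_t * (1 - M_norm), then median filtering.

    d_t: weighted-fused frame (float array), f_t: original current frame,
    m_s: standard-deviation mask. Min-max normalization and ksize = 3 are
    assumptions; the patent fixes neither.
    """
    m = (m_s - m_s.min()) / (m_s.max() - m_s.min() + 1e-8)  # M_norm in [0, 1]
    m = m[:, :, None]                    # broadcast over color channels
    d_prime = d_t * m + f_t.astype(np.float64) * (1.0 - m)
    return cv2.medianBlur(np.clip(d_prime, 0, 255).astype(np.uint8), ksize)
```

Chained together, the sketches above mirror steps S10-S50 (all names are this sketch's own, not the patent's):

```python
frames = subsample_frames(raw_frames, step=3)  # raw_frames: captured sequence
g_n = forgetting_blur(frames)
d_t = weighted_fuse(g_n)
result = reprocess(d_t, frames[-1], std_mask(frames))
```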
In the embodiment of the invention, the data optimization and detail enhancement processing are carried out on the multi-frame continuous images acquired in the track traffic running process, so that the data volume required to be processed in the environment sensing process can be effectively reduced, the environment sensing efficiency is improved, the accuracy of sensing dynamic details is effectively improved, the accuracy of environment sensing is improved, and the running safety of the track traffic is improved.
The following describes an environment sensing device for rail transit provided by an embodiment of the present invention with reference to the accompanying drawings.
Referring to fig. 5, based on the same inventive concept, an embodiment of the present invention provides an environment sensing device for rail transit, the device includes: an image acquisition unit configured to acquire continuous frame image information; a first processing unit for performing gaussian filtering processing on the continuous frame image information to obtain primarily processed image information; the image fusion unit is used for performing fusion operation on the primarily processed image information to obtain fused image information; the second processing unit is used for executing reprocessing operation on the fused image information to obtain reprocessed image information; and the environment sensing unit is used for determining environment sensing information of the track traffic based on the reprocessed image information.
In an embodiment of the present invention, the apparatus further includes a sampling unit, where the sampling unit is configured to: after the continuous frame image information is acquired, acquiring a preset sampling rule; randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images; and taking the sampled image as new continuous frame image information.
In an embodiment of the present invention, the first processing unit includes: a parameter information determining module for determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, characterized by P_t = {k, σ | k = 2t - 1, σ = t - 0.5}, where F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is the current frame, t is the ordinal of each image, and n is the number of images in F_n; a Gaussian information determining module for determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

and a Gaussian processing module for generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \}.
In an embodiment of the present invention, the image fusion unit includes: a weighting coefficient determining module for determining a weighting coefficient w_t of each image in the primarily processed image information [the weighting-coefficient formula appears only as an image in the original and is not reproduced here]; a weighted fusion module for performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, characterized as d_t = Σ g_t · w_t; and a fused image determining module for generating fused image information based on the weighted-fused image d_t.
In an embodiment of the present invention, the second processing unit includes: a standard deviation acquisition module for acquiring standard deviation information M_s of each image in the fused image information; a normalization module for performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm; a weighted fusion module for processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm); a median filtering module for performing median filtering on the reprocessed image d'_t to obtain a filtered image; and a reprocessed image determining module for generating reprocessed image information based on the filtered image.
In an embodiment of the present invention, the standard deviation acquisition module is configured to: perform a color transformation operation on each image in the fused images to obtain corresponding transformed images; acquire brightness data of the transformed images; and determine the standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
Further, the embodiment of the present invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the method described in the embodiment of the present invention.
The foregoing details of the optional implementation of the embodiment of the present invention have been described in detail with reference to the accompanying drawings, but the embodiment of the present invention is not limited to the specific details of the foregoing implementation, and various simple modifications may be made to the technical solution of the embodiment of the present invention within the scope of the technical concept of the embodiment of the present invention, and these simple modifications all fall within the protection scope of the embodiment of the present invention.
In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, various possible combinations of embodiments of the present invention are not described in detail.
Those skilled in the art will appreciate that all or part of the steps of the methods in the above embodiments may be implemented by a program stored in a storage medium, the program including instructions for causing a single-chip microcomputer, chip, or processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In addition, the various embodiments of the present invention may be combined in any manner, so long as the concept of the embodiments of the present invention is not violated, and such combinations should likewise be regarded as disclosed by the embodiments of the present invention.

Claims (13)

1. A method of environmental awareness for rail transit, the method comprising:
acquiring continuous frame image information;
performing Gaussian filtering processing on the continuous frame image information to obtain primarily processed image information;
performing fusion operation on the primarily processed image information to obtain fused image information;
performing reprocessing operation on the fused image information to obtain reprocessed image information;
and determining environment perception information of the track traffic based on the reprocessed image information.
2. The method according to claim 1, wherein the method further comprises:
after the continuous frame image information is acquired, acquiring a preset sampling rule;
randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images;
and taking the sampled image as new continuous frame image information.
3. The method according to claim 1, wherein the performing Gaussian filtering processing on the continuous frame image information to obtain primarily processed image information comprises:

determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, the Gaussian parameter information P_t characterized by: P_t = {k, σ | k = 2t - 1, σ = t - 0.5},

wherein F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is characterized as the current frame, t as the ordinal of each image, and n as the number of images in F_n;

determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, the Gaussian distribution matrix G(x, y) characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

and generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, the primarily processed image information G_n characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \}.
4. The method according to claim 3, wherein the performing a fusion operation on the primarily processed image information to obtain fused image information comprises:

determining a weighting coefficient w_t of each image in the primarily processed image information, the weighting coefficient w_t characterized by a formula that appears only as an image in the original and is not reproduced here;

performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, the weighted-fused image d_t characterized as d_t = Σ g_t · w_t;

and generating fused image information based on the weighted-fused image d_t.
5. The method of claim 4, wherein performing a reprocessing operation on the fused image information to obtain reprocessed image information comprises:
acquiring standard deviation information M_s of each image in the fused image information;

performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm;

processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, the reprocessed image d'_t characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm);

performing median filtering on the reprocessed image d'_t to obtain a filtered image;
and generating reprocessed image information based on the filtered processed image.
6. The method according to claim 5, wherein the acquiring standard deviation information M_s of each image in the fused image information comprises:
performing color transformation operation on each image in the fused images to obtain corresponding transformed images;
acquiring brightness data of the transformed image;
determining standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
7. An environmental awareness apparatus for rail transit, the apparatus comprising:
an image acquisition unit configured to acquire continuous frame image information;
a first processing unit for performing gaussian filtering processing on the continuous frame image information to obtain primarily processed image information;
the image fusion unit is used for performing fusion operation on the primarily processed image information to obtain fused image information;
the second processing unit is used for executing reprocessing operation on the fused image information to obtain reprocessed image information;
and the environment sensing unit is used for determining environment sensing information of the track traffic based on the reprocessed image information.
8. The apparatus of claim 7, further comprising a sampling unit to:
after the continuous frame image information is acquired, acquiring a preset sampling rule;
randomly sampling images in the continuous frame image information based on the preset sampling rule to obtain sampled images;
and taking the sampled image as new continuous frame image information.
9. The apparatus of claim 7, wherein the first processing unit comprises:
a parameter information determining module for determining Gaussian parameter information P_t of each image in the continuous frame image information F_n, the Gaussian parameter information P_t characterized by: P_t = {k, σ | k = 2t - 1, σ = t - 0.5},

wherein F_n = {f_t, f_{t-1}, …, f_{t-n}}, t ∈ T+, f_t is characterized as the current frame, t as the ordinal of each image, and n as the number of images in F_n;

a Gaussian information determining module for determining a corresponding Gaussian distribution matrix G(x, y) based on the Gaussian parameter information P_t, the Gaussian distribution matrix G(x, y) characterized by

    G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right);

a Gaussian processing module for generating the primarily processed image information G_n based on the Gaussian distribution matrix G(x, y) and the continuous frame image information F_n, the primarily processed image information G_n characterized by

    G_n = \{ g_t \mid g_t = G(x, y) \otimes f_t, \ f_t \in F_n \}.
10. The apparatus of claim 9, wherein the image fusion unit comprises:
a weighting coefficient determining module for determining a weighting coefficient w_t of each image in the primarily processed image information, the weighting coefficient w_t characterized by a formula that appears only as an image in the original and is not reproduced here;

a weighted fusion module for performing a weighted fusion operation on the current frame based on the weighting coefficient w_t to obtain a weighted-fused image d_t, the weighted-fused image d_t characterized as d_t = Σ g_t · w_t;

a fused image determining module for generating fused image information based on the weighted-fused image d_t.
11. The apparatus of claim 10, wherein the second processing unit comprises:
a standard deviation acquisition module for acquiring standard deviation information M_s of each image in the fused image information;

a normalization module for performing a normalization operation on the standard deviation information M_s to obtain normalized information M_norm;

a weighted fusion module for processing the weighted-fused image d_t according to a preset rule to obtain a reprocessed image d'_t, the reprocessed image d'_t characterized by d'_t = d_t · M_norm + f_t · (1 - M_norm);

a median filtering module for performing median filtering on the reprocessed image d'_t to obtain a filtered image;
and the reprocessed image determining module is used for generating reprocessed image information based on the filtered image.
12. The apparatus of claim 11, wherein the standard deviation acquisition module is configured to:
performing color transformation operation on each image in the fused images to obtain corresponding transformed images;
acquiring brightness data of the transformed image;
determining standard deviation information M_s of each image based on the brightness data and the time sequence T of each image in the fused images.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method of any of claims 1-6.
CN202111443634.0A 2021-11-30 2021-11-30 Environment sensing method and environment sensing device for rail transit Pending CN116205795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111443634.0A CN116205795A (en) 2021-11-30 2021-11-30 Environment sensing method and environment sensing device for rail transit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111443634.0A CN116205795A (en) 2021-11-30 2021-11-30 Environment sensing method and environment sensing device for rail transit

Publications (1)

Publication Number Publication Date
CN116205795A true CN116205795A (en) 2023-06-02

Family

ID=86506416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111443634.0A Pending CN116205795A (en) 2021-11-30 2021-11-30 Environment sensing method and environment sensing device for rail transit

Country Status (1)

Country Link
CN (1) CN116205795A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117112240A (en) * 2023-10-24 2023-11-24 北京木牛一心机器人科技有限公司 Environment sensing method, device, computer equipment and storage medium
CN117112240B (en) * 2023-10-24 2024-01-19 北京木牛一心机器人科技有限公司 Environment sensing method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110781768A (en) Target object detection method and device, electronic device and medium
CN110557521B (en) Method, device and equipment for removing rain from video and computer readable storage medium
CN111627057B (en) Distance measurement method, device and server
CN112261403B (en) Device and method for detecting dirt of vehicle-mounted camera
Gluhaković et al. Vehicle detection in the autonomous vehicle environment for potential collision warning
CN112446316A (en) Accident detection method, electronic device, and storage medium
CN112613434A (en) Road target detection method, device and storage medium
Yu et al. Realization of a real-time image denoising system for dashboard camera applications
CN116205795A (en) Environment sensing method and environment sensing device for rail transit
CN110728193A (en) Method and device for detecting richness characteristics of face image
CN112446292B (en) 2D image salient object detection method and system
CN113033715A (en) Target detection model training method and target vehicle detection information generation method
CN112241963A (en) Lane line identification method and system based on vehicle-mounted video and electronic equipment
JP2018124963A (en) Image processing device, image recognition device, image processing program, and image recognition program
EP4332910A1 (en) Behavior detection method, electronic device, and computer readable storage medium
Suganya et al. Gradient flow-based deep residual networks for enhancing visibility of scenery images degraded by foggy weather conditions
WO2018143278A1 (en) Image processing device, image recognition device, image processing program, and image recognition program
WO2021014809A1 (en) Image recognition evaluation program, image recognition evaluation method, evaluation device, and evaluation system
CN114399671A (en) Target identification method and device
Mühlhaus et al. Vehicle classification in urban regions of the Global South from aerial imagery
CN117274957B (en) Road traffic sign detection method and system based on deep learning
CN113850219B (en) Data collection method, device, vehicle and storage medium
Mien et al. Estimating Traffic Density in Uncertain Environment: A Case Study of Danang, Vietnam
CN111008544B (en) Traffic monitoring and unmanned auxiliary system and target detection method and device
Trần et al. An object detection method for aerial hazy images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination