CN113891048A - Over-sight distance image transmission system for rail locomotive - Google Patents

Over-sight distance image transmission system for rail locomotive

Info

Publication number
CN113891048A
CN113891048A
Authority
CN
China
Prior art keywords
scene video
time
unit
scene
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111262093.1A
Other languages
Chinese (zh)
Other versions
CN113891048B (en)
Inventor
王晓鹏
何成虎
戴相龙
李学钧
蒋勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Haohan Information Technology Co ltd
Original Assignee
Jiangsu Haohan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Haohan Information Technology Co ltd filed Critical Jiangsu Haohan Information Technology Co ltd
Priority to CN202111262093.1A priority Critical patent/CN113891048B/en
Publication of CN113891048A publication Critical patent/CN113891048A/en
Application granted granted Critical
Publication of CN113891048B publication Critical patent/CN113891048B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Train Traffic Observation, Control, And Security (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a beyond-visual-range image transmission system for a rail locomotive. The system comprises: an image acquisition module, with a high-definition camera preset at the locomotive head, used to shoot the scene in the train's running direction, generate a scene video and apply a time mark to the scene video; a relay transmission module, used to transmit the scene video acquired at the locomotive head to the tail synchronous transmission module through a mobile communication network and relay antennas arranged along the train; and a tail synchronous transmission module, used to receive the scene video transmitted over the mobile communication network and the scene video transmitted through the relay antennas, acquire the time mark of each, transmit the time-marked scene video to the terminal device of the driver at the locomotive tail, and play the scene video when the time marks are the same.

Description

Beyond-visual-range image transmission system for a rail locomotive
Technical Field
The invention relates to the technical field of train driving, and in particular to a beyond-visual-range image transmission system for a rail locomotive.
Background
At present, in the field of rail transit, the train is a common piece of rail transit equipment and may have two locomotives. In ordinary operation the train is pulled only by the locomotive at its head, and the locomotive at its tail plays no role. When climbing a grade, however, the train is driven by both the head and the tail: the two locomotives run synchronously, the tail locomotive pushing the train and the head locomotive pulling it. Both the head locomotive and the tail locomotive are operated manually by train drivers. In this case the driver in the tail locomotive cannot see the condition of the front locomotive and the cars, resulting in a lack of visibility.
In practice, the driver at the tail of the train extends his head out of the window to observe the direction of travel, which leaves the tail driver with asymmetric information. Moreover, because the driver at the tail must operate the train while leaning out of the window, operating the train is greatly hindered: the driver has to operate and observe at the same time, so operating errors are easy to make. Leaning out of the window also exposes the tail driver to serious harm in hail, rain or sandstorm weather, and observation becomes inaccurate or is not attempted at all. A beyond-visual-range synchronous scene transmission system is therefore needed to help the driver at the tail of the train observe more conveniently.
Disclosure of Invention
The invention provides a beyond-visual-range image transmission system for a rail locomotive, which is used to solve the above problems.
A beyond-visual-range image transmission system for a rail locomotive comprises:
an image acquisition module: a high-definition camera preset at the locomotive head, used to shoot the scene in the train's running direction, generate a scene video and apply a time mark to the scene video;
a relay transmission module: used to transmit the scene video acquired at the locomotive head to the tail synchronous transmission module through a mobile communication network and relay antennas arranged along the train;
a tail synchronous transmission module: used to receive the scene video transmitted over the mobile communication network and the scene video transmitted through the relay antennas, acquire the time mark of each, transmit the time-marked scene video to the terminal device of the driver at the locomotive tail, and play the scene video when the time marks are the same.
As an embodiment of the present invention: the image acquisition module includes:
a deployment unit: used to combine the plurality of cameras arranged at the head of the train into at least one shooting unit, with at least two cameras forming one camera combination;
a collecting unit: used to acquire the two-dimensional images captured by each shooting unit in every acquisition period, and to combine the multiple frames of two-dimensional images captured in sequence by the current camera combination within the current acquisition period;
an image processing unit: used to extract the foreground from the multiple frames of two-dimensional images to obtain a plurality of two-dimensional foreground images, and to project the two-dimensional foreground images into three-dimensional space to obtain three-dimensional foreground images; the multiple frames of two-dimensional images, the two-dimensional foreground images and the three-dimensional foreground images are acquired at the same times within the current acquisition period;
a calculating unit: used to calculate the change of the corresponding position, in the plurality of two-dimensional foreground images, of each point in the scene video; a point in the scene video is the point in the scene video corresponding to each two-dimensional foreground point of a two-dimensional foreground image; the two-dimensional foreground points are the pixel points of the two-dimensional foreground image;
an image change unit: used to obtain, from the plurality of two-dimensional foreground images, the plurality of three-dimensional foreground images corresponding to the points in the scene video;
a three-dimensional fusion unit: used to fuse, at each acquisition time, the three-dimensional foreground points in the three-dimensional foreground images corresponding to the camera combinations in the monitored scene, according to the rule that all three-dimensional foreground points corresponding to the same point in the scene video at the current acquisition time are fused into one three-dimensional fused foreground point; all the three-dimensional fused foreground points obtained after fusion are combined into the three-dimensional fused foreground image of that acquisition time; the three-dimensional foreground points are the pixel points of the three-dimensional foreground images;
a three-dimensional change unit: used to determine the change of the corresponding position of each point in the scene video in the plurality of three-dimensional fused foreground images of each acquisition time;
a scene distribution unit: used to determine a target object and to determine the spatial distribution characteristics and operation distribution characteristics of the target object according to the plurality of three-dimensional fused foreground images at each acquisition time of the current acquisition period and the change of the corresponding position of each point in the scene video in those images.
As an embodiment of the present invention: the image acquisition module includes:
a panorama playback unit: used to synchronously acquire 360-degree panoramic images with the high-definition camera;
an image mapping unit: used to spatially map the 360-degree panoramic image and convert it into spatial data; wherein
the mapping is performed by two formulas that map the image coordinates (α, v) to the actual coordinates (x, y); [the two mapping formulas are given in the original only as formula images and are not reproduced here]
wherein H is the height of the scene in the scene video, f is the normalized focal length of the high-definition camera, φ is the inclination angle of the high-definition camera to the horizontal plane, α is the abscissa of the image coordinates, v is the ordinate of the image coordinates, x is the abscissa of the actual coordinates, and y is the ordinate of the actual coordinates.
As an embodiment of the present invention: the image acquisition module includes:
a dynamic data acquisition module: used to establish a unique data transmission channel between the rail vehicle and the high-definition camera, wherein
a dynamic data transmission rule is set between the high-definition camera and the rail vehicle, and the high-definition camera updates the scene video in real time according to the dynamic data transmission rule;
a data verification module: used to acquire real-time scene dynamic change data through the high-definition camera and to verify, from the real-time scene dynamic change data, whether the scene video is correct.
As an embodiment of the present invention: the relay module includes:
a mobile communication unit: used to construct, through a mobile communication network, a mobile network transmission channel for the scene video transmission;
an intermediate transmission unit: used to construct a relay transmission channel for the scene video transmission through a plurality of groups of relay antennas arranged in an array.
As an embodiment of the present invention: the relay module further comprises:
an identification unit: used to identify the image type information of each frame in the scene video and to transmit images of different types through different data transmission channels according to the image type of each frame;
an interface unit: used to transmit the scene video information acquired by the high-definition camera to the terminal device of the driver at the locomotive tail;
a labeling unit: used to receive the scene video and label it; the labeling comprises time labeling and element labeling;
a statistics unit: used to summarize and count the data in all scene videos to obtain the optimal data parameter field information under each data item.
As an embodiment of the present invention: the tail synchronous transmission module comprises:
a first acquisition unit: used to receive the scene video transmitted over the mobile communication network and determine first time information according to the time mark in that scene video;
a second acquisition unit: used to receive the scene video transmitted through the relay antennas and determine second time information according to the time mark in that scene video;
a judging unit: used to determine the time nodes of the first time information and the second time information respectively and, once the first time node corresponds to the second time node, to transmit the scene video synchronously to the terminal device of the driver at the locomotive tail and play it.
As an embodiment of the present invention: the tail synchronous transmission module further comprises:
an abnormality determination unit: a CPU processor used to perform video integrity detection on the scene video transmitted through the relay antennas and determine an abnormal value according to the detection result;
a space calling unit: used to obtain the angular difference, specific difference and medium loss of the scene video in any time period and their three-dimensional relation to the scene video;
a trend prediction unit: used to predict the trend of the scene video according to the angular difference, specific difference and medium loss of the scene video;
a transmission loss judgment unit: used to calculate the electric power from the voltage and current values of the relay antennas and determine the transmission loss;
a correction unit: used to correct, according to the transmission loss, the scene video transmitted to the terminal device of the driver at the locomotive tail.
As an embodiment of the present invention: the abnormality determination unit determines the abnormal value through the following steps:
Step 1: acquire the time mark of the scene video and establish a timing diagram of the scene video; wherein
the abscissa of the timing diagram is time and the ordinate of the timing diagram is the video feature;
Step 2: establish the autocorrelation function of the scene video according to the timing diagram. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein C_k represents the autocorrelation function of the scene video; a_t represents the video feature of the scene video at time t; a_{t-k} represents the video feature of the scene video at time t-k, k ∈ N being the time argument; Cov denotes covariance; Da_t represents the integrity of the scene video at time t; Da_{t-k} represents the integrity of the scene video at time t-k;
Step 3: determine the abnormal value of the scene video according to the autocorrelation function. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein Y represents the abnormal value; when the autocorrelation function equals 1, the abnormal value is also 1; when the autocorrelation function is greater than 1, the abnormal value is also greater than 1.
As an embodiment of the present invention: the correction unit corrects the scene video through the following steps:
Step S1: acquire the scene video and determine the root mean square of the scene video. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein S represents the root mean square of the scene video; T represents the time constant of the scene video; ā represents the feature mean of the scene video;
Step S2: determine the compensation value of the scene video from the root mean square and the transmission loss through a formula given in the original only as a formula image;
wherein g(a_t, q) represents the correlation between the scene video and the transmission loss at time t; q represents the loss parameter of the transmission loss; Y_t represents the integrity of the video elements of the scene video at time t;
Step S3: correct the original scene video according to the compensation value. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein Z represents the compensation correction value of the accumulated sum of the corrected scene video.
The invention has the beneficial effects that, when a double-headed train climbs a grade or drivers otherwise need to control the train synchronously from the front and the rear, the driver at the tail of the train can watch the scene video of the train's running direction from inside the cab. The scene video helps the tail driver observe the running scene of the train, providing beyond-visual-range observation and sparing the driver from leaning his head out of the window to observe the running scene.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
Fig. 1 is a system diagram of a beyond-visual-range image transmission system for a rail locomotive according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in Fig. 1, the invention is a beyond-visual-range image transmission system for a rail locomotive, comprising:
the image acquisition module: a high-definition camera preset at the locomotive head, used to shoot the scene in the train's running direction, generate a scene video and apply a time mark to the scene video;
a relay transmission module: used to transmit the scene video acquired at the locomotive head to the tail synchronous transmission module through a mobile communication network and relay antennas arranged along the train;
the tail synchronous transmission module: used to receive the scene video transmitted over the mobile communication network and the scene video transmitted through the relay antennas, acquire the time mark of each, transmit the time-marked scene video to the terminal device of the driver at the locomotive tail, and play the scene video when the time marks are the same.
The invention has the beneficial effects that, when a double-headed train climbs a grade or drivers otherwise need to control the train synchronously from the front and the rear, the driver at the tail of the train can watch the scene video of the train's running direction from inside the cab. The scene video helps the tail driver observe the running scene of the train, providing beyond-visual-range observation and sparing the driver from leaning his head out of the window to observe the running scene.
As an embodiment of the present invention: the image acquisition module includes:
a deployment unit: used to combine the plurality of cameras arranged at the head of the train into at least one shooting unit, with at least two cameras forming one camera combination;
Combining multiple cameras makes it possible to shoot a complete scene video.
a collecting unit: used to acquire the two-dimensional images captured by each shooting unit in every acquisition period, and to combine the multiple frames of two-dimensional images captured in sequence by the current camera combination within the current acquisition period;
To ensure the comprehensiveness of data acquisition, the invention adopts a periodic acquisition mode and a multi-frame acquisition mode.
an image processing unit: used to extract the foreground from the multiple frames of two-dimensional images to obtain a plurality of two-dimensional foreground images, and to project the two-dimensional foreground images into three-dimensional space to obtain three-dimensional foreground images; the multiple frames of two-dimensional images, the two-dimensional foreground images and the three-dimensional foreground images are acquired at the same times within the current acquisition period. Because video is acquired, the invention can stack the acquired images axis by axis in a spatial coordinate system, thereby ensuring data integrity.
a calculating unit: used to calculate the change of the corresponding position, in the plurality of two-dimensional foreground images, of each point in the scene video; a point in the scene video is the point in the scene video corresponding to each two-dimensional foreground point of a two-dimensional foreground image; the two-dimensional foreground points are the pixel points of the two-dimensional foreground image. When the calculating unit calculates the change, it works pixel point by pixel point; if the corresponding pixel points differ, the acquired data are wrong.
an image change unit: used to obtain, from the plurality of two-dimensional foreground images, the plurality of three-dimensional foreground images corresponding to the points in the scene video;
a three-dimensional fusion unit: used to fuse, at each acquisition time, the three-dimensional foreground points in the three-dimensional foreground images corresponding to the camera combinations in the monitored scene, according to the rule that all three-dimensional foreground points corresponding to the same point in the scene video at the current acquisition time are fused into one three-dimensional fused foreground point; all the three-dimensional fused foreground points obtained after fusion are combined into the three-dimensional fused foreground image of that acquisition time; the three-dimensional foreground points are the pixel points of the three-dimensional foreground images;
a three-dimensional change unit: used to determine the change of the corresponding position of each point in the scene video in the plurality of three-dimensional fused foreground images of each acquisition time;
a scene distribution unit: used to determine a target object and to determine the spatial distribution characteristics and operation distribution characteristics of the target object according to the plurality of three-dimensional fused foreground images at each acquisition time of the current acquisition period and the change of the corresponding position of each point in the scene video in those images.
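The foreground-extraction and fusion pipeline above can be illustrated with a minimal sketch. It assumes, purely for illustration, that the two-dimensional foreground is obtained by thresholded background subtraction and that the three-dimensional foreground points assigned to the same scene point by different camera combinations are fused by averaging; the patent does not fix either choice, and the function names are hypothetical.

```python
import numpy as np

def extract_foreground(frame: np.ndarray, background: np.ndarray,
                       threshold: float = 25.0) -> np.ndarray:
    """2-D foreground mask via thresholded background subtraction (assumed method)."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff > threshold

def fuse_foreground_points(points_per_camera: list) -> np.ndarray:
    """Fuse the 3-D foreground points that different camera combinations assign
    to the same scene point into a single fused point (here: their mean)."""
    return np.mean(np.stack(points_per_camera, axis=0), axis=0)

# Example: one moving pixel in a 4x4 frame, then fusing two cameras' estimates.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1, 2] = 200
print(extract_foreground(frame, background).sum())   # -> 1 foreground pixel
cam_a = np.array([12.3, 0.8, 1.6])                    # (x, y, z) from camera A
cam_b = np.array([12.1, 0.9, 1.5])                    # (x, y, z) from camera B
print(fuse_foreground_points([cam_a, cam_b]))         # -> fused 3-D foreground point
```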
As an embodiment of the present invention: the image acquisition module includes:
a panorama playback unit: used to synchronously acquire 360-degree panoramic images with the high-definition camera. To obtain video with a more comprehensive viewing angle, the invention adopts a 360-degree panoramic camera.
an image mapping unit: used to spatially map the 360-degree panoramic image and convert it into spatial data; wherein
the mapping is performed by two formulas that map the image coordinates (α, v) to the actual coordinates (x, y); [the two mapping formulas are given in the original only as formula images and are not reproduced here]
wherein H is the height of the scene in the scene video, f is the normalized focal length of the high-definition camera, φ is the inclination angle of the high-definition camera to the horizontal plane, α is the abscissa of the image coordinates, v is the ordinate of the image coordinates, x is the abscissa of the actual coordinates, and y is the ordinate of the actual coordinates.
The purpose of spatial mapping is to perform video fusion on images acquired by different cameras and video cameras, so that the integrity and comprehensiveness of the video are guaranteed.
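Because the patent's two mapping formulas are reproduced only as images, the exact expressions are not available here. The sketch below is an illustrative flat-ground inverse-perspective mapping built only from the quantities the text defines (scene height H, normalized focal length f, tilt angle φ, image coordinates (α, v)); it is an assumption standing in for the patent's own formulas, not a reproduction of them.

```python
import math

def image_to_ground(alpha: float, v: float, H: float, f: float, phi: float):
    """Map image coordinates (alpha, v) to ground coordinates (x, y), assuming a
    pinhole camera at height H with normalized focal length f, tilted phi below
    the horizontal and looking over flat ground (illustrative model only)."""
    theta = phi + math.atan2(v, f)        # ray angle below the horizon
    if theta <= 0:
        raise ValueError("ray does not meet the ground plane")
    y = H / math.tan(theta)               # forward distance from the camera
    x = alpha * math.hypot(H, y) / f      # lateral offset, scaled by range
    return x, y

# A pixel 40 rows below and 15 columns right of the principal point.
print(image_to_ground(alpha=15, v=40, H=3.2, f=800, phi=math.radians(10)))
```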
As an embodiment of the present invention: the image acquisition module includes:
a dynamic data acquisition module: used to establish a unique data transmission channel between the rail vehicle and the high-definition camera, wherein
a dynamic data transmission rule is set between the high-definition camera and the rail vehicle, and the high-definition camera updates the scene video in real time according to the dynamic data transmission rule. The dynamic data transmission rule provides location-unique and efficient transmission, and thereby enables real-time updating.
a data verification module: used to acquire real-time scene dynamic change data through the high-definition camera and to verify, from the real-time scene dynamic change data, whether the scene video is correct. The shot video may contain errors, and because the number of cameras is large, the invention judges whether the scene video is correct according to the scene dynamic change data.
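As a minimal sketch of the kind of check the data verification module performs, one can compare the scene video's inter-frame change profile against the real-time scene dynamic change data. Treating the dynamic change data as a reference clip and using a mean-absolute-difference tolerance are assumptions made purely for illustration; the patent does not specify the verification rule.

```python
import numpy as np

def dynamic_change(frames: np.ndarray) -> np.ndarray:
    """Mean absolute inter-frame difference for an (N, H, W) grayscale clip."""
    return np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))

def scene_video_is_correct(video: np.ndarray, reference: np.ndarray,
                           tolerance: float = 5.0) -> bool:
    """Accept the scene video if its change profile tracks the reference data."""
    deviation = np.abs(dynamic_change(video) - dynamic_change(reference))
    return bool(np.all(deviation <= tolerance))

# Example with a synthetic 8-frame clip compared against itself.
rng = np.random.default_rng(0)
clip = rng.integers(0, 255, size=(8, 120, 160)).astype(np.uint8)
print(scene_video_is_correct(clip, clip.copy()))   # -> True
```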
As an embodiment of the present invention: the relay module includes:
a mobile communication unit: used to construct, through a mobile communication network, a mobile network transmission channel for the scene video transmission;
an intermediate transmission unit: used to construct a relay transmission channel for the scene video transmission through a plurality of groups of relay antennas arranged in an array.
For a rail vehicle, mobile network signal transmission can be unstable or interrupted, especially in sparsely populated mountainous areas, so a short-range data transmission channel is also established through relay antennas at various locations.
As an embodiment of the present invention: the relay module further comprises:
an identification unit: used to identify the image type information of each frame in the scene video and to transmit images of different types through different data transmission channels according to the image type of each frame. Dividing the data transmission into channels keeps the data independent.
an interface unit: used to transmit the scene video information acquired by the high-definition camera to the terminal device of the driver at the locomotive tail. An independent video transmission structure facilitates synchronous and rapid transmission of the scene video.
a labeling unit: used to receive the scene video and label it; the labeling comprises time labeling and element labeling. The purpose of the labeling is to facilitate the driver's observation.
a statistics unit: used to summarize and count the data in all scene videos to obtain the optimal data parameter field information under each data item. This unit turns the data into textual statistics so that the driver can understand them better.
As an embodiment of the present invention: the tail synchronous transmission module comprises:
a first acquisition unit: used to receive the scene video transmitted over the mobile communication network and determine first time information according to the time mark in that scene video;
a second acquisition unit: used to receive the scene video transmitted through the relay antennas and determine second time information according to the time mark in that scene video;
a judging unit: used to determine the time nodes of the first time information and the second time information respectively and, once the first time node corresponds to the second time node, to transmit the scene video synchronously to the terminal device of the driver at the locomotive tail and play it.
The purpose of the first time and the second time is to keep the video transmission synchronous; it not only verifies the integrity of the video but also judges whether the video is transmitted efficiently and stably. Of course, transmission over a single communication mode can also be implemented.
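A minimal sketch of the play-when-time-marks-match rule described above: frames arriving over the mobile network and over the relay antennas are buffered by time mark, and a frame is released to the tail driver's terminal only when both channels have delivered the same mark. The buffer layout and channel names are illustrative assumptions, not part of the patent disclosure.

```python
from collections import defaultdict

class TailSynchronizer:
    """Release a frame only when both channels report the same time mark."""

    def __init__(self):
        self._pending = defaultdict(dict)   # time mark -> {channel: frame}

    def receive(self, channel: str, time_mark: int, frame: bytes):
        self._pending[time_mark][channel] = frame
        if {"mobile", "relay"} <= self._pending[time_mark].keys():
            frames = self._pending.pop(time_mark)
            return time_mark, frames["mobile"]   # both time marks agree: play
        return None                              # still waiting for the other channel

sync = TailSynchronizer()
print(sync.receive("mobile", 1001, b"frame-a"))   # -> None (relay copy not yet in)
print(sync.receive("relay", 1001, b"frame-a"))    # -> (1001, b'frame-a')
```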
As an embodiment of the present invention: the tail synchronous transmission module further comprises:
an abnormality determination unit: a CPU processor used to perform video integrity detection on the scene video transmitted through the relay antennas and determine an abnormal value according to the detection result.
The integrity detection is based on the autocorrelation of the video data, and the abnormal value is calculated mainly with the time factor taken into account.
a space calling unit: used to obtain the angular difference, specific difference and medium loss of the scene video in any time period and their three-dimensional relation to the scene video.
The angular difference is the difference between the camera's shooting angle and the ground when the vehicle goes uphill, downhill or runs on level track; the specific difference is the difference between videos transmitted over different communication modes; the medium loss is the error caused by the equipment's own interference when the video is transmitted through a medium such as a relay antenna, generally a time error and a loss of picture definition.
a trend prediction unit: used to predict the trend of the scene video according to the angular difference, specific difference and medium loss of the scene video. The trend prediction predicts the vehicle's next operating mode.
a transmission loss judgment unit: used to calculate the electric power from the voltage and current values of the relay antennas and determine the transmission loss. The transmission loss is mainly the loss caused by interference during transmission over the circuit and can be obtained by a power calculation.
a correction unit: used to correct, according to the transmission loss, the scene video transmitted to the terminal device of the driver at the locomotive tail.
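The power calculation behind the transmission loss judgment unit can be sketched as follows. The sampling interval and the fixed loss fraction are assumptions chosen for illustration, since the patent only states that the electric quantity is computed from the relay antenna's voltage and current values.

```python
def transmission_energy(voltages, currents, dt: float) -> float:
    """Energy (joules) fed to the relay antenna: sum of V * I over each interval."""
    return sum(v * i * dt for v, i in zip(voltages, currents))

def transmission_loss(voltages, currents, dt: float, loss_fraction: float = 0.08) -> float:
    """Assumed model: losses are a fixed fraction of the consumed energy."""
    return loss_fraction * transmission_energy(voltages, currents, dt)

# One second of samples at 10 Hz, 12 V supply, roughly 1.5 A draw.
volts = [12.0] * 10
amps = [1.5, 1.4, 1.6, 1.5, 1.5, 1.4, 1.6, 1.5, 1.5, 1.5]
print(transmission_loss(volts, amps, dt=0.1))
```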
As an embodiment of the present invention: the abnormality determination unit determines the abnormal value through the following steps:
Step 1: acquire the time mark of the scene video and establish a timing diagram of the scene video; wherein
the abscissa of the timing diagram is time and the ordinate of the timing diagram is the video feature.
The timing diagram charts the development of events in the scene video over time.
Step 2: establish the autocorrelation function of the scene video according to the timing diagram. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein C_k represents the autocorrelation function of the scene video; a_t represents the video feature of the scene video at time t; a_{t-k} represents the video feature of the scene video at time t-k, k ∈ N being the time argument; Cov denotes covariance; Da_t represents the integrity of the scene video at time t; Da_{t-k} represents the integrity of the scene video at time t-k.
The purpose of the autocorrelation function is abnormality self-checking: because the video features at different moments and the integrity of the video at different moments correspond to each other, the loss error of the scene video can be calculated through autocorrelation.
Step 3: determine the abnormal value of the scene video according to the autocorrelation function. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein Y represents the abnormal value; when the autocorrelation function equals 1, the abnormal value is also 1; when the autocorrelation function is greater than 1, the abnormal value is also greater than 1.
Because the abnormal value is calculated regardless of how the video is transmitted, some abnormality always exists; in actual operation an abnormality threshold is set so that an abnormality alarm can be raised. The purpose of this step is only to calculate the abnormal value.
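Since the patent's autocorrelation and abnormal-value formulas appear only as images, the sketch below uses a conventional lag-k autocorrelation of the per-frame video feature and maps its deviation from perfect correlation to an abnormal value of at least 1. Both choices are assumptions that merely mirror the variables defined above, not the patent's exact expressions.

```python
import numpy as np

def lag_autocorrelation(features, k: int) -> float:
    """Lag-k autocorrelation of the per-frame video feature a_t (assumed form:
    covariance of (a_t, a_{t-k}) normalised by the feature variance)."""
    a = np.asarray(features, dtype=np.float64)
    cov = np.cov(a[k:], a[:-k])[0, 1]
    return cov / a.var()

def abnormal_value(features, k: int = 1) -> float:
    """Illustrative abnormal value: 1 when correlation is perfect, above 1 otherwise."""
    return 1.0 + abs(1.0 - lag_autocorrelation(features, k))

frames = np.array([10.2, 10.1, 10.3, 10.2, 25.0, 10.1])   # one corrupted frame
print(abnormal_value(frames))                               # noticeably above 1
```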
As an embodiment of the present invention: the correction unit corrects the scene video through the following steps:
Step S1: acquire the scene video and determine the root mean square of the scene video. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein S represents the root mean square of the scene video; T represents the shooting time of the scene video; ā represents the feature mean of the scene video.
The root mean square represents the characteristics of the scene video over the shooting time; the purpose of this calculation is to obtain the error that transmission loss introduces between the root mean square of the scene video and the root mean square after shooting is finished.
Step S2: determine the compensation value of the scene video from the root mean square and the transmission loss through a formula given in the original only as a formula image;
wherein g(a_t, q) represents the correlation between the scene video and the transmission loss at time t; q represents the loss parameter of the transmission loss; Y_t represents the integrity of the video elements of the scene video at time t.
In step S2 the compensation value of the scene video is calculated; the expression in the formula image represents the error value, so one minus the error value gives the compensation.
Step S3: correct the original scene video according to the compensation value. [The formula is given in the original only as a formula image and is not reproduced here.]
wherein Z represents the compensation correction value of the accumulated sum of the corrected scene video.
When the scene video is corrected, it is repaired only through the compensation value. This maintains the integrity and reasonableness of the corrected scene video, keeps the scene video from changing too much, and improves the driver's experience.
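The correction formulas are likewise given only as images; the minimal sketch below assumes that step S1's root mean square is taken over the per-frame video feature and that, as the step S2 commentary suggests, one minus the error fraction gives the compensation by which the received features are rescaled. None of these choices is fixed by the patent, and the function names are illustrative.

```python
import numpy as np

def feature_rms(features) -> float:
    """Step S1 (assumed form): root mean square of the per-frame video feature."""
    a = np.asarray(features, dtype=np.float64)
    return float(np.sqrt(np.mean(a ** 2)))

def compensation_value(loss_fraction: float) -> float:
    """Step S2 (assumed form): one minus the transmission-error fraction."""
    return 1.0 - loss_fraction

def correct_video(received, loss_fraction: float) -> np.ndarray:
    """Step S3 (assumed form): rescale the received features by the compensation."""
    return np.asarray(received, dtype=np.float64) / compensation_value(loss_fraction)

sent = np.array([10.0, 9.9, 10.1, 10.0])
received = sent * (1.0 - 0.06)                 # 6 % attenuation in transit
restored = correct_video(received, loss_fraction=0.06)
print(feature_rms(sent), feature_rms(received), feature_rms(restored))
```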
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A beyond-visual-range image transmission system for a rail locomotive, characterized by comprising:
an image acquisition module: a high-definition camera preset at the locomotive head, used to shoot the scene in the train's running direction, generate a scene video and apply a time mark to the scene video;
a relay transmission module: used to transmit the scene video acquired at the locomotive head to the tail synchronous transmission module through a mobile communication network and relay antennas arranged along the train;
a tail synchronous transmission module: used to receive the scene video transmitted over the mobile communication network and the scene video transmitted through the relay antennas, acquire the time mark of each, transmit the time-marked scene video to the terminal device of the driver at the locomotive tail, and play the scene video when the time marks are the same.
2. The system according to claim 1, wherein the image acquisition module comprises:
a deployment unit: used to combine the plurality of cameras arranged at the head of the train into at least one shooting unit, with at least two cameras forming one camera combination;
a collecting unit: used to acquire the two-dimensional images captured by each shooting unit in every acquisition period, and to combine the multiple frames of two-dimensional images captured in sequence by the current camera combination within the current acquisition period;
an image processing unit: used to extract the foreground from the multiple frames of two-dimensional images to obtain a plurality of two-dimensional foreground images, and to project the two-dimensional foreground images into three-dimensional space to obtain three-dimensional foreground images; the multiple frames of two-dimensional images, the two-dimensional foreground images and the three-dimensional foreground images are acquired at the same times within the current acquisition period;
a calculating unit: used to calculate the change of the corresponding position, in the plurality of two-dimensional foreground images, of each point in the scene video; a point in the scene video is the point in the scene video corresponding to each two-dimensional foreground point of a two-dimensional foreground image; the two-dimensional foreground points are the pixel points of the two-dimensional foreground image;
an image change unit: used to obtain, from the plurality of two-dimensional foreground images, the plurality of three-dimensional foreground images corresponding to the points in the scene video;
a three-dimensional fusion unit: used to fuse, at each acquisition time, the three-dimensional foreground points in the three-dimensional foreground images corresponding to the camera combinations in the monitored scene, according to the rule that all three-dimensional foreground points corresponding to the same point in the scene video at the current acquisition time are fused into one three-dimensional fused foreground point; all the three-dimensional fused foreground points obtained after fusion are combined into the three-dimensional fused foreground image of that acquisition time; the three-dimensional foreground points are the pixel points of the three-dimensional foreground images;
a three-dimensional change unit: used to determine the change of the corresponding position of each point in the scene video in the plurality of three-dimensional fused foreground images of each acquisition time;
a scene distribution unit: used to determine a target object and to determine the spatial distribution characteristics and operation distribution characteristics of the target object according to the plurality of three-dimensional fused foreground images at each acquisition time of the current acquisition period and the change of the corresponding position of each point in the scene video in those images.
3. The system according to claim 1, wherein the image acquisition module comprises:
a panorama playback unit: used to synchronously acquire 360-degree panoramic images with the high-definition camera;
an image mapping unit: used to spatially map the 360-degree panoramic image and convert it into spatial data; wherein
the mapping is performed by two formulas that map the image coordinates (α, v) to the actual coordinates (x, y); [the two mapping formulas are given in the original only as formula images and are not reproduced here]
wherein H is the height of the scene in the scene video, f is the normalized focal length of the high-definition camera, φ is the inclination angle of the high-definition camera to the horizontal plane, α is the abscissa of the image coordinates, v is the ordinate of the image coordinates, x is the abscissa of the actual coordinates, and y is the ordinate of the actual coordinates.
4. The system according to claim 1, wherein the image acquisition module comprises:
a dynamic data acquisition module: used to establish a unique data transmission channel between the rail vehicle and the high-definition camera, wherein
a dynamic data transmission rule is set between the high-definition camera and the rail vehicle, and the high-definition camera updates the scene video in real time according to the dynamic data transmission rule;
a data verification module: used to acquire real-time scene dynamic change data through the high-definition camera and to verify, from the real-time scene dynamic change data, whether the scene video is correct.
5. The system according to claim 1, wherein the relay module comprises:
a mobile communication unit: used to construct, through a mobile communication network, a mobile network transmission channel for the scene video transmission;
an intermediate transmission unit: used to construct a relay transmission channel for the scene video transmission through a plurality of groups of relay antennas arranged in an array.
6. The beyond-visual-range image transmission system for a rail locomotive according to claim 1, wherein the relay module further comprises:
an identification unit: used to identify the image type information of each frame in the scene video and to transmit images of different types through different data transmission channels according to the image type of each frame;
an interface unit: used to transmit the scene video information acquired by the high-definition camera to the terminal device of the driver at the locomotive tail;
a labeling unit: used to receive the scene video and label it; the labeling comprises time labeling and element labeling;
a statistics unit: used to summarize and count the data in all scene videos to obtain the optimal data parameter field information under each data item.
7. The system according to claim 1, wherein the tail synchronous transmission module comprises:
a first acquisition unit: used to receive the scene video transmitted over the mobile communication network and determine first time information according to the time mark in that scene video;
a second acquisition unit: used to receive the scene video transmitted through the relay antennas and determine second time information according to the time mark in that scene video;
a judging unit: used to determine the time nodes of the first time information and the second time information respectively and, once the first time node corresponds to the second time node, to transmit the scene video synchronously to the terminal device of the driver at the locomotive tail and play it.
8. The system according to claim 1, wherein the tail synchronous transmission module further comprises:
an abnormality determination unit: a CPU processor used to perform video integrity detection on the scene video transmitted through the relay antennas and determine an abnormal value according to the detection result;
a space calling unit: used to obtain the angular difference, specific difference and medium loss of the scene video in any time period and their three-dimensional relation to the scene video;
a trend prediction unit: used to predict the trend of the scene video according to the angular difference, specific difference and medium loss of the scene video;
a transmission loss judgment unit: used to calculate the electric power from the voltage and current values of the relay antennas and determine the transmission loss;
a correction unit: used to correct, according to the transmission loss, the scene video transmitted to the terminal device of the driver at the locomotive tail.
9. The system of claim 8, wherein the abnormality determination unit determines the abnormal value through the following steps:
step 1: acquiring the time mark of the scene video and establishing a timing diagram of the scene video, wherein the abscissa of the timing diagram is time and the ordinate of the timing diagram is the video feature;
step 2: establishing the autocorrelation function of the scene video according to the timing diagram; [the formula is given in the original only as a formula image and is not reproduced here]
wherein C_k represents the autocorrelation function of the scene video; a_t represents the video feature of the scene video at time t; a_{t-k} represents the video feature of the scene video at time t-k, k ∈ N being the time argument; Cov denotes covariance; Da_t represents the integrity of the scene video at time t; Da_{t-k} represents the integrity of the scene video at time t-k;
step 3: determining the abnormal value of the scene video according to the autocorrelation function; [the formula is given in the original only as a formula image and is not reproduced here]
wherein Y represents the abnormal value; when the autocorrelation function equals 1, the abnormal value is also 1; when the autocorrelation function is greater than 1, the abnormal value is also greater than 1.
10. The system according to claim 8, wherein the correction unit corrects the scene video through the following steps:
step S1: acquiring the scene video and determining the root mean square of the scene video; [the formula is given in the original only as a formula image and is not reproduced here]
wherein S represents the root mean square of the scene video; T represents the time constant of the scene video; ā represents the feature mean of the scene video;
step S2: determining the compensation value of the scene video from the root mean square and the transmission loss through a formula given in the original only as a formula image;
wherein g(a_t, q) represents the correlation between the scene video and the transmission loss at time t; q represents the loss parameter of the transmission loss; Y_t represents the integrity of the video elements of the scene video at time t;
step S3: correcting the original scene video according to the compensation value; [the formula is given in the original only as a formula image and is not reproduced here]
wherein Z represents the compensation correction value of the accumulated sum of the corrected scene video.
CN202111262093.1A 2021-10-28 2021-10-28 Over-sight distance image transmission system for rail locomotive Active CN113891048B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111262093.1A CN113891048B (en) 2021-10-28 2021-10-28 Over-sight distance image transmission system for rail locomotive

Publications (2)

Publication Number Publication Date
CN113891048A true CN113891048A (en) 2022-01-04
CN113891048B CN113891048B (en) 2022-11-15

Family

ID=79013832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111262093.1A Active CN113891048B (en) 2021-10-28 2021-10-28 Over-sight distance image transmission system for rail locomotive

Country Status (1)

Country Link
CN (1) CN113891048B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006250917A (en) * 2005-02-14 2006-09-21 Kazuo Iwane High-precision cv arithmetic unit, and cv-system three-dimensional map forming device and cv-system navigation device provided with the high-precision cv arithmetic unit
CN105979203A (en) * 2016-04-29 2016-09-28 中国石油大学(北京) Multi-camera cooperative monitoring method and device
JP6312944B1 (en) * 2017-02-22 2018-04-18 三菱電機株式会社 Driving support device, map providing device, driving support program, map providing program, and driving support system
CN109246416A (en) * 2018-09-21 2019-01-18 福州大学 The panorama mosaic method of vehicle-mounted six road camera
CN109435852A (en) * 2018-11-08 2019-03-08 湖北工业大学 A kind of panorama type DAS (Driver Assistant System) and method for large truck
KR20200095380A (en) * 2019-01-31 2020-08-10 주식회사 스트라드비젼 Method and device for delivering steering intention of autonomous driving module or driver to steering apparatus of subject vehicle more accurately
CN111415416A (en) * 2020-03-31 2020-07-14 武汉大学 Method and system for fusing monitoring real-time video and scene three-dimensional model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116456048A (en) * 2023-06-19 2023-07-18 中汽信息科技(天津)有限公司 Automobile image recording method and system based on scene adaptation
CN116456048B (en) * 2023-06-19 2023-08-18 中汽信息科技(天津)有限公司 Automobile image recording method and system based on scene adaptation

Also Published As

Publication number Publication date
CN113891048B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
JP6494103B2 (en) Train position detection system using image processing and train position and environment change detection system using image processing
CN104036490B (en) Foreground segmentation method suitable for mobile communications network transmission
CN107851125A (en) The processing of two step object datas is carried out by vehicle and server database to generate, update and transmit the system and method in accurate road characteristic data storehouse
CN107850672A (en) System and method for accurate vehicle positioning
CN107850453A (en) Road data object is matched to generate and update the system and method for accurate transportation database
CN108196285A (en) A kind of Precise Position System based on Multi-sensor Fusion
CN110068814B (en) Method and device for measuring distance of obstacle
CN111024115A (en) Live-action navigation method, device, equipment, storage medium and vehicle-mounted multimedia system
CN113891048B (en) Over-sight distance image transmission system for rail locomotive
US20200302191A1 (en) Road map generation system and road map generation method
CN110736472A (en) indoor high-precision map representation method based on fusion of vehicle-mounted all-around images and millimeter wave radar
US11398054B2 (en) Apparatus and method for detecting fog on road
CN112687103A (en) Vehicle lane change detection method and system based on Internet of vehicles technology
CN111835998B (en) Beyond-the-horizon panoramic image acquisition method, device, medium, equipment and system
CN113011252B (en) Rail foreign matter intrusion detection system and method
CN113591643A (en) Underground vehicle station entering and exiting detection system and method based on computer vision
CN110517251B (en) Scenic spot area overload detection and early warning system and method
CN112550377A (en) Rail transit emergency positioning method and system based on video identification and IMU (inertial measurement Unit) equipment
CN109874099B (en) Networking vehicle-mounted equipment flow control system
CN113561897B (en) Method and system for judging ramp parking position of driving test vehicle based on panoramic all-round view
CN117291864A (en) Visibility estimating device and method, and recording medium
CN114910085A (en) Vehicle fusion positioning method and device based on road administration facility identification
CN111179355A (en) Binocular camera calibration method combining point cloud and semantic recognition
CN112001970A (en) Monocular vision odometer method based on point-line characteristics
WO2021153068A1 (en) Facility diagnosis system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Beyond Line of Sight Image Transmission System for Rail Locomotives

Effective date of registration: 20231017

Granted publication date: 20221115

Pledgee: Nantong Branch of Bank of Nanjing Co.,Ltd.

Pledgor: JIANGSU HAOHAN INFORMATION TECHNOLOGY Co.,Ltd.

Registration number: Y2023980061462