CN116112475A - Image transmission method for automatic driving remote take-over and vehicle-mounted terminal - Google Patents

Image transmission method for automatic driving remote take-over and vehicle-mounted terminal

Info

Publication number
CN116112475A
Authority
CN
China
Prior art keywords
image
frame rate
view
vehicle
images
Prior art date
Legal status
Pending
Application number
CN202211444829.1A
Other languages
Chinese (zh)
Inventor
黎国伟
Current Assignee
DeepRoute AI Ltd
Original Assignee
DeepRoute AI Ltd
Priority date
Filing date
Publication date
Application filed by DeepRoute AI Ltd filed Critical DeepRoute AI Ltd
Priority to CN202211444829.1A priority Critical patent/CN116112475A/en
Publication of CN116112475A publication Critical patent/CN116112475A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an image transmission method and a vehicle-mounted terminal for automatic driving remote take-over. The method comprises the following steps: acquiring images of each view angle of a vehicle and speed data of the vehicle, and determining a base frame rate corresponding to each view angle image based on the speed data; respectively acquiring the content change degree of each view angle image, respectively adjusting the base frame rate based on the content change degree to obtain a target frame rate, and determining a spliced image to be transmitted; and acquiring motion behavior information of the vehicle, adjusting the definition of the spliced image, and performing image coding and transmission on the adjusted spliced image. The invention can adjust the frame rate of each view angle image according to its content change degree and adjust the definition according to the motion behavior information of the vehicle, thereby reducing the transmission delay of the image data while ensuring the definition of the transmitted images.

Description

Image transmission method for automatic driving remote take-over and vehicle-mounted terminal
Technical Field
The invention relates to the technical field of image data transmission, in particular to an image transmission method for automatic driving remote take-over and a vehicle-mounted terminal.
Background
During automatic driving, a vehicle may encounter a hardware or system fault in actual operation that prevents automatic driving from continuing. In this case the vehicle needs to be taken over remotely: the current camera images of the vehicle are transmitted to a remote end over the network, and a remote operator takes over manually to bring the vehicle to a nearby stop or to drive it away from the fault location. Both the transmission delay of the image data and the sharpness of the images must be considered in this process. Because the image data collected by the cameras on the vehicle is large and the image content changes continuously while the vehicle is running, the prior art struggles to reduce the transmission delay during image transmission and cannot guarantee the definition of the transmitted images.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The invention aims to solve the technical problems that, in the prior art, transmission delay is difficult to reduce during image transmission and the definition of the transmitted images cannot be guaranteed.
In a first aspect, the present invention provides an image transmission method for automated driving remote takeover, wherein the method comprises:
acquiring images of all view angles of a vehicle and speed data of the vehicle, and determining a basic frame rate corresponding to the images of all view angles based on the speed data;
respectively acquiring content variation degrees of the view images, respectively adjusting basic frame rates corresponding to the view images based on the content variation degrees to obtain target frame rates corresponding to the view images, and determining spliced images to be transmitted based on the target frame rates;
and acquiring the movement behavior information of the vehicle, adjusting the definition of the spliced image based on the movement behavior information, and carrying out image coding and transmission on the adjusted spliced image.
In one implementation manner, the determining, based on the speed data, a base frame rate corresponding to each view angle image includes:
comparing the speed data with a preset speed threshold;
if the speed data is smaller than the speed threshold, acquiring a preset first linear relation, wherein the first linear relation is used for reflecting the corresponding relation between the speed data and the basic frame rate;
And determining the basic frame rate according to the first linear relation.
In one implementation manner, the determining, based on the speed data, a base frame rate corresponding to each view angle image includes:
and if the speed data is larger than the speed threshold, acquiring a preset first frame rate value, and taking the first frame rate value as the basic frame rate.
In one implementation, the separately obtaining the content variation degree of each view image includes:
acquiring current contour data corresponding to a current frame of each view angle image;
acquiring historical contour data corresponding to a previous frame of each view angle image;
and determining the content change degree according to the current contour data and the historical contour data, wherein the content change degree is used for reflecting the contour difference degree between the current frame and the last frame of each visual angle image.
In one implementation manner, the adjusting the base frame rate corresponding to each view image based on the content change degree to obtain the target frame rate corresponding to each view image includes:
acquiring a preset first profile difference threshold and a preset second profile difference threshold, and comparing the content change degree with the first profile difference threshold and the second profile difference threshold respectively, wherein the first profile difference threshold is smaller than the second profile difference threshold;
if the content change degree of any one of the view images is larger than the first contour difference threshold and smaller than the second contour difference threshold, acquiring a preset second linear relation, wherein the second linear relation is used for reflecting the corresponding relation between the content change degree and the target frame rate;
and according to the second linear relation, adjusting the base frame rate corresponding to the view image to obtain the target frame rate of the view image.
In one implementation manner, the adjusting the base frame rate corresponding to each view image based on the content change degree to obtain the target frame rate corresponding to each view image includes:
if the content change degree of any one of the view images is smaller than the first contour difference threshold value, setting a target frame rate corresponding to the view image as a second frame rate value;
and if the content change degree of any one of the view images is larger than the second contour difference threshold value, setting the target frame rate corresponding to the view image as a third frame rate value.
In one implementation, the determining, based on the target frame rate, a stitched image to be transmitted includes:
determining a stitching mode of each view angle image;
and superimposing, based on the stitching mode, three of the frame-rate-adjusted view angle images onto the remaining view angle images, respectively, to obtain the stitched image.
In one implementation, the adjusting the sharpness of the stitched image based on the movement behavior information includes:
if the movement behavior information is that the vehicle is backward, reducing the compression rate of the back view angle image in the spliced image;
if the movement behavior information is that the vehicle advances, reducing the compression rate of the front view image in the spliced image;
if the movement behavior information is that the vehicle turns left, reducing the compression rate of the left front view angle image and the left rear view angle image in the spliced image;
and if the movement behavior information is that the vehicle turns right, reducing the compression rate of the right front view image and the right rear view image in the spliced image.
In a second aspect, an embodiment of the present invention further provides an image transmission apparatus for automated driving remote take over, wherein the apparatus includes:
the base frame rate determining module is used for acquiring images of each view angle of a vehicle and speed data of the vehicle, and determining the base frame rate corresponding to each view angle image based on the speed data;
The base frame rate adjustment module is used for respectively acquiring the content change degree of each view angle image, respectively adjusting the base frame rate corresponding to each view angle image based on the content change degree to obtain a target frame rate corresponding to each view angle image, and determining a spliced image to be transmitted based on the target frame rate;
the definition adjusting module is used for acquiring the movement behavior information of the vehicle, adjusting the definition of the spliced image based on the movement behavior information, and carrying out image coding and transmission on the adjusted spliced image.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, where the vehicle-mounted terminal is a commercial display terminal or a screen-projection terminal, and the vehicle-mounted terminal includes a memory, a processor, and an image transmission program for remote take over of autopilot, where the image transmission program for remote take over of autopilot is stored in the memory and is executable on the processor, and when the processor executes the image transmission program for remote take over of autopilot, the processor implements the steps of the image transmission method for remote take over of autopilot in any one of the above schemes.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores an image transmission program for remote take over for automatic driving, and when the image transmission program for remote take over for automatic driving is executed by a processor, the steps of the image transmission method for remote take over for automatic driving according to any one of the above schemes are implemented.
The beneficial effects are that: compared with the prior art, the invention provides an image transmission method for automatic driving remote take-over, which comprises the following steps: acquiring images of all view angles of a vehicle and speed data of the vehicle, and determining a basic frame rate corresponding to the images of all view angles based on the speed data; respectively acquiring content variation degrees of the view images, respectively adjusting basic frame rates corresponding to the view images based on the content variation degrees to obtain target frame rates corresponding to the view images, and determining spliced images to be transmitted based on the target frame rates; and acquiring the movement behavior information of the vehicle, adjusting the definition of the spliced image based on the movement behavior information, and carrying out image coding and transmission on the adjusted spliced image. The invention can adjust the frame rate of each view angle image according to the content change degree of each view angle image so as to reduce the data quantity required to be transmitted and reduce the transmission delay of image data. In addition, the invention can also adjust the definition of each view angle image according to the movement behavior information of the vehicle so as to ensure the definition of the transmitted spliced image.
Drawings
Fig. 1 is a schematic diagram of image areas of each view angle collected by a vehicle camera in an image transmission method for automatic driving remote take-over according to an embodiment of the present invention.
Fig. 2 is a flowchart of a specific implementation of an image transmission method for automatic driving remote take over according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a stitched image in an image transmission method for automatic driving remote takeover according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a first linear relationship when determining a base frame rate in an image transmission method for remote take-over of automatic driving according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a second linear relationship when the basic frame rate is adjusted in the image transmission method for remote take-over of automatic driving according to the embodiment of the present invention.
Fig. 6 is a schematic functional block diagram of an image transmission device for remote take over of automatic driving according to an embodiment of the present invention.
Fig. 7 is a schematic block diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment provides an image transmission method for automatic driving remote take-over, and when the method is applied specifically, the method firstly obtains images of all view angles of a vehicle and speed data of the vehicle, and determines a base frame rate corresponding to the images of all view angles based on the speed data. Because the change degrees of the visual angle images collected by the vehicle camera are different at different speeds, in order to reduce the data volume of image transmission and avoid transmission delay, the embodiment respectively obtains the content change degrees of the visual angle images, respectively adjusts the basic frame rate corresponding to each visual angle image based on the content change degrees to obtain the target frame rate corresponding to each visual angle image, and determines the spliced image to be transmitted based on the target frame rate. At this time, since the frame rate of the view angle images has been adjusted and the stitched image to be transmitted is the stitching of the respective view angle images, the amount of data of the stitched image to be transmitted at this time is relatively small. In order to ensure the definition of the image to be transmitted, the embodiment acquires the movement behavior information of the vehicle, adjusts the definition of the spliced image based on the movement behavior information, and performs image coding and transmission on the adjusted spliced image. Therefore, the invention can splice the images of each view angle of the vehicle, and adjust the frame rate of the images of each view angle according to the content change degree of the images of each view angle, so as to reduce the data quantity required to be transmitted and reduce the transmission delay of the image data. In addition, the invention can also adjust the definition of each view angle image according to the movement behavior information of the vehicle so as to ensure the definition of the transmitted spliced image.
For example, the vehicle of the present embodiment is provided with six cameras, which are respectively disposed at different positions of the vehicle, as shown in fig. 1, and are respectively located in six directions of the front, rear, front left, rear left, front right and rear right of the vehicle, so that the six cameras can respectively acquire the front view image, the rear view image, the front left view image, the rear left view image, the front right view image and the rear right view image. In order to reduce the amount of data transmitted by the image, the present embodiment acquires current speed data of the vehicle, and determines a base frame rate of each view image according to the acquired speed data, where the base frame rate is an initial frame rate updated for each view image, for example, the base frame rate is determined to be 20fps. Because the vehicle is running continuously, the content of the view angle images collected by each camera is also changing continuously, in order to reduce the amount of image transmission data, in this embodiment, the content change degrees of the six view angle images are obtained respectively, for example, when the content change degree of the left front view angle image is too large, the base frame rate of the left front view angle image is adjusted to the target frame rate, so that the left front view angle image can be updated, all view angle images are adjusted in frame rate according to the above manner, and then each view angle image after the frame rate adjustment is spliced, so that the spliced image to be transmitted can be obtained. Then, the embodiment may further obtain movement behavior information of the vehicle, for example, when the movement behavior information of the vehicle is a left turn, the definition of the corresponding view angle image may be adjusted, so as to change the overall definition of the stitched image, and finally the embodiment may perform image encoding and transmission on the adjusted stitched image. Thus, the data volume of image transmission is reduced, and the definition of the image transmission is ensured.
Exemplary method
The image transmission method for automatic driving remote take-over of the embodiment can be applied to a vehicle-mounted terminal, and the vehicle-mounted terminal is an intelligent control terminal or an intelligent control platform arranged on a vehicle. As shown in fig. 2, the image transmission method for automatic driving remote take-over of the present embodiment specifically includes the steps of:
step S100, obtaining images of all view angles of a vehicle and speed data of the vehicle, and determining a base frame rate corresponding to the images of all view angles based on the speed data.
In order to reduce the data volume of image transmission, the embodiment first acquires the view angle images collected by each camera on the vehicle. The vehicle-mounted terminal of the embodiment then acquires the speed data of the vehicle. Because the image content collected by the cameras changes at different rates at different vehicle speeds, and each view angle image needs to be updated once its content changes, the frame rate of the image update needs to be adaptively adjusted to reduce the data volume of the image transmission. According to the embodiment, the base frame rate corresponding to each view angle image in the stitched image can be determined according to the acquired speed data.
In one implementation, the present embodiment includes the following steps when acquiring the base frame rate:
step S101, comparing the speed data with a preset speed threshold;
step S102, if the speed data is smaller than the speed threshold, acquiring a preset first linear relation, wherein the first linear relation is used for reflecting the corresponding relation between the speed data and the basic frame rate;
step S103, determining the basic frame rate according to the first linear relation.
Step S104, if the speed data is larger than the speed threshold, acquiring a preset first frame rate value, and taking the first frame rate value as the basic frame rate.
Specifically, in this embodiment the frame rate of each camera is 60 fps. For convenience of stitching, 50% of the frames of each view angle image may be stitched first, so that the frame rate of the stitched image is 30 fps; this 30 fps is the original frame rate. Next, the embodiment obtains the speed data of the vehicle and a preset speed threshold and compares the two. If the speed data is smaller than the speed threshold, the vehicle has not exceeded the threshold speed, and the higher the vehicle speed, the higher the frame rate of each view angle image in the stitched image should be. Therefore, the embodiment may obtain a preset first linear relationship, which reflects the correspondence between the speed data and the base frame rate and is an increasing function, and the corresponding base frame rate may be determined based on this first linear relationship. If the speed data is greater than the speed threshold, then in order to avoid an increase in the amount of image data to be transmitted, the embodiment obtains a preset first frame rate value and takes it as the base frame rate. That is, when the speed exceeds the threshold, the base frame rate is a fixed value, which ensures that the amount of image data to be transmitted does not become excessively large.
For example, as shown in fig. 4, the speed threshold of the present embodiment is 15 km/h. As can be seen from fig. 4, if the speed data of the vehicle is less than 15 km/h, the base frame rate is determined according to the first linear relationship: when the speed data is 0, the vehicle is stopped and the base frame rate is 10 fps; when the speed data reaches the speed threshold of 15 km/h, the base frame rate is 40 fps. When the speed data increases further, the base frame rate is the first frame rate value, which is the maximum value given by the first linear relationship at 15 km/h. Therefore, in this embodiment, when the speed data is less than 15 km/h the base frame rate varies in the range of 10 to 40 fps, and when the speed data exceeds 15 km/h the base frame rate is a fixed value of 40 fps.
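As a concrete reading of this example, the mapping from speed to base frame rate can be sketched as follows in Python; the function name, the clamping above the threshold, and the default parameter values are assumptions drawn from the 10 fps / 40 fps / 15 km/h figures above, not code from the patent.

```python
def base_frame_rate(speed_kmh: float,
                    speed_threshold: float = 15.0,
                    min_fps: float = 10.0,
                    max_fps: float = 40.0) -> float:
    """Map vehicle speed to a base frame rate (first linear relationship).

    Below the speed threshold the rate grows linearly from min_fps (vehicle
    stopped) to max_fps (at the threshold); at or above the threshold the
    rate is held at the fixed first frame rate value (max_fps).
    """
    if speed_kmh >= speed_threshold:
        return max_fps                                   # first frame rate value (fixed)
    slope = (max_fps - min_fps) / speed_threshold        # 2 fps per km/h in the example
    return min_fps + slope * max(speed_kmh, 0.0)

# Example values matching the embodiment: 0 km/h -> 10 fps, 15 km/h -> 40 fps.
assert base_frame_rate(0) == 10.0
assert base_frame_rate(15) == 40.0
```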
Step 200, respectively obtaining content variation degrees of the view images, respectively adjusting base frame rates corresponding to the view images based on the content variation degrees to obtain target frame rates corresponding to the view images, and determining spliced images to be transmitted based on the target frame rates.
As the vehicle travels, the image content of the view angle images acquired by each camera changes. Once the image content changes, the corresponding view angle image is updated; during video compression the updated image is compressed and then transmitted. If some of the view angle images in the subsequent stitched images are updated at a lower frame rate, the data volume of video coding can be effectively reduced. Therefore, the embodiment obtains the content change degree of each view angle image and, based on it, adjusts the base frame rate corresponding to each view angle image to obtain the corresponding target frame rate; in this case the target frame rate is smaller than the base frame rate, that is, the base frame rate is reduced. After each view angle image has had its frame rate adjusted, the embodiment stitches the frame-rate-adjusted view angle images to obtain the stitched image to be transmitted. Since the base frame rate of one or more view angle images in the stitched image has been reduced, the data volume of the subsequent video coding can be significantly reduced.
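One plausible way to realise per-view frame rate control within a single stitched stream is to refresh each view's tile only when its own target frame rate allows it, as in the following sketch; the class and method names are illustrative assumptions, not identifiers from the patent.

```python
import time

class ViewUpdater:
    """Refresh each view's tile only at its own target frame rate, so views
    with little content change contribute fewer changed pixels to the
    stitched stream (names and structure are illustrative assumptions)."""

    def __init__(self):
        self._last_update = {}   # view name -> timestamp of the last refresh
        self._cached = {}        # view name -> last transmitted frame

    def maybe_update(self, name, frame, target_fps, now=None):
        now = time.monotonic() if now is None else now
        period = 1.0 / target_fps
        last = self._last_update.get(name)
        if last is None or now - last >= period:
            self._last_update[name] = now
            self._cached[name] = frame       # refresh the tile with the new frame
        return self._cached[name]            # otherwise reuse the previous frame
```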
In one implementation, the method includes the following steps when acquiring the content variation degree of each view angle image:
step S201, current contour data corresponding to the current frame of each view angle image are obtained;
step S202, acquiring historical contour data corresponding to a previous frame of each view angle image;
Step S203, determining the content change degree according to the current contour data and the historical contour data, wherein the content change degree is used for reflecting the contour difference degree between the current frame and the previous frame of each view angle image.
In the present embodiment, the content change degree reflects the degree of contour difference between the current frame and the previous frame of each view angle image, and this degree determines whether the base frame rate of the view angle image needs to be adjusted. First, current contour data corresponding to the current frame of each view angle image is obtained, where the current contour data is the contour data of the content in the view angle image, such as the street view beside the road. Historical contour data corresponding to the previous frame of each view angle image is then acquired. The current contour data is compared with the historical contour data to determine the difference between them; this difference reflects the degree of contour difference between the current frame and the previous frame of the view angle image, from which the content change degree is obtained.
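The patent does not specify how the contour data are extracted. As one hedged illustration, the sketch below uses Canny edge maps as a stand-in for the contour data and scores the content change degree as the fraction of pixels whose edge response differs between the previous and current frame; the use of OpenCV/NumPy and the specific Canny thresholds are assumptions.

```python
import cv2
import numpy as np

def content_change_degree(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Return a 0..1 score reflecting the contour difference between the
    previous frame (historical contour data) and the current frame
    (current contour data) of one view image.

    Contour extraction via Canny edges is an assumption; any edge or
    contour detector could be substituted.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    prev_edges = cv2.Canny(prev_gray, 100, 200)
    curr_edges = cv2.Canny(curr_gray, 100, 200)
    # Pixels where the two edge maps disagree, as a fraction of all pixels.
    diff = cv2.absdiff(prev_edges, curr_edges)
    return float(np.count_nonzero(diff)) / diff.size
```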
After determining the content change degree, the present embodiment may adjust the base frame rate corresponding to each view image based on the content change degree, i.e. adjust the base frame rate of the corresponding view image to the target frame rate. In this embodiment, a preset first contour difference threshold and a preset second contour difference threshold are first obtained, where the first contour difference threshold is smaller than the second contour difference threshold. The content change degree is then compared with the first contour difference threshold and the second contour difference threshold, respectively. If the content change degree of any one of the view images is greater than the first contour difference threshold and less than the second contour difference threshold, a preset second linear relationship is obtained, as shown in fig. 5, which reflects the correspondence between the content change degree and the target frame rate; the present embodiment may then determine the target frame rate corresponding to the content change degree based on the second linear relationship and adjust the base frame rate of that view image to the target frame rate. If the content change degree of any one of the view images is smaller than the first contour difference threshold, the target frame rate corresponding to that view image is set to a second frame rate value, which is a fixed value; that is, if the content change degree of a certain view image is smaller than the first contour difference threshold, the base frame rate of that view image is adjusted to the second frame rate value. If the content change degree of any one of the view images is greater than the second contour difference threshold, the target frame rate corresponding to that view image is set to a third frame rate value, which is also a fixed value; that is, if the content change degree of a certain view image is greater than the second contour difference threshold, the base frame rate of that view image is adjusted to the third frame rate value.
For example, as shown in fig. 5, the content change degree of the present embodiment is expressed as a percentage, and the second linear relationship in fig. 5 is y = 60x + b - 30, where b is the base frame rate and x is the content change degree. As can be seen from fig. 5, the second frame rate value is 10 fps, the third frame rate value is 40 fps, and the minimum and maximum values of the second linear relationship are 10 fps and 40 fps, respectively. Therefore, when the content change degree of a certain view angle image is greater than the first contour difference threshold and less than the second contour difference threshold, the corresponding target frame rate can be calculated according to y = 60x + b - 30, and the target frame rate ranges from 10 fps to 40 fps. If the content change degree of a certain view angle image is smaller than the first contour difference threshold, the target frame rate is 10 fps; if it is larger than the second contour difference threshold, the target frame rate is 40 fps.
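Putting the threshold logic and the second linear relationship together, a minimal sketch might look like the following. The numeric contour-difference thresholds are placeholders (the patent does not give their values), while the 10 fps / 40 fps limits and y = 60x + b - 30 (with x treated as a fraction of 1) follow the example above.

```python
def target_frame_rate(base_fps: float, change_degree: float,
                      low_thresh: float = 0.1, high_thresh: float = 0.6,
                      second_rate: float = 10.0, third_rate: float = 40.0) -> float:
    """Adjust the base frame rate of one view image to its target frame rate.

    change_degree is treated as a fraction in [0, 1]; low_thresh and
    high_thresh stand in for the first and second contour difference
    thresholds (their numeric values here are assumptions).
    """
    if change_degree < low_thresh:
        return second_rate                      # second frame rate value (fixed)
    if change_degree > high_thresh:
        return third_rate                       # third frame rate value (fixed)
    # Second linear relationship: y = 60x + b - 30, clamped to [10, 40] fps.
    y = 60.0 * change_degree + base_fps - 30.0
    return max(second_rate, min(third_rate, y))
```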
After the base frame rate of each view angle image is adjusted, the embodiment can update each view angle image based on the target frame rate, adjust the base frame rate of each view angle image to the target frame rate, and then splice each view angle image after the frame rate adjustment, so as to obtain a spliced image to be transmitted.
In one implementation, the present embodiment may first acquire a stitching mode for the view angle images and then stitch them according to that mode. As shown in fig. 3, there are six view angle images in the present embodiment: a front view image, a back view image, a left front view image, a left back view image, a right front view image, and a right back view image. When stitching, three view angle images may be arranged in a row from left to right, for example the left front view image, the front view image, and the right front view image. The remaining three view images are then superimposed on these three images respectively; as shown in fig. 3, the left back view image may be superimposed on the left front view image, the back view image on the front view image, and the right back view image on the right front view image.

In another implementation, the stitching mode may also be set and selected. The stitched image shown in fig. 3 corresponds to a default stitching mode, in which each view angle image is displayed in its corresponding area at a preset image size: the left front, front, and right front view images have the same size, while the left rear, rear, and right rear view images are each one ninth of that size. After the view angle images are acquired, the corresponding stitching mode is obtained first, and if it is the default stitching mode, the view angle images are stitched in the form shown in fig. 3.

Of course, the present embodiment may also switch the stitching mode according to the movement behavior of the vehicle; for example, a left-turn stitching mode, a right-turn stitching mode, a forward stitching mode, and a backward stitching mode may additionally be set. When the motion behavior of the vehicle is a left turn, the left-turn stitching mode may be selected, in which the left front view image is enlarged; for example, in the stitched image shown in fig. 3, switching to the left-turn stitching mode enlarges the left front view image to two or three times the image size of the default stitching mode for display. Likewise, when the movement behavior of the vehicle is forward, the forward stitching mode may be selected, in which the front view image is enlarged to two or three times its default size. When the motion behavior of the vehicle is a right turn, the right-turn stitching mode may be selected, in which the right front view image is enlarged to two or three times its default size.
When the motion behavior of the vehicle is backward, a backward stitching mode may be selected at this time, and in the backward stitching mode, the back view image is enlarged, for example, in the stitched image shown in fig. 3, if the backward stitching mode is switched, the back view image is enlarged by two or three times based on the image size corresponding to the default stitching mode, and displayed.
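A minimal sketch of the default stitching layout described above follows, assuming all six views share the same resolution; interpreting "one ninth of the image size" as one third per side and placing each rear view in the top-left corner of its front counterpart are illustrative assumptions.

```python
import cv2
import numpy as np

def stitch_default(front_left, front, front_right,
                   rear_left, rear, rear_right):
    """Default stitching mode: the three front-facing views side by side,
    with each rear-facing view shrunk (here to 1/3 per side, i.e. roughly
    one ninth of the area) and overlaid on its front counterpart.
    The top-left overlay position is an illustrative choice.
    """
    row = []
    for big, small in ((front_left, rear_left), (front, rear), (front_right, rear_right)):
        tile = big.copy()
        h, w = big.shape[:2]
        thumb = cv2.resize(small, (w // 3, h // 3))
        tile[:h // 3, :w // 3] = thumb           # superimpose the rear view
        row.append(tile)
    return np.hstack(row)                        # left-front | front | right-front
```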
And step S300, acquiring the motion behavior information of the vehicle, adjusting the definition of the spliced image based on the motion behavior information, and carrying out image coding and transmission on the adjusted spliced image.
In order to ensure the definition of the spliced image, the embodiment can also adjust the definition of each view angle image in the spliced image according to the motion behavior information of the vehicle, and after the adjustment, the embodiment performs image coding and transmission on the adjusted spliced image. Thereby not only reducing the data volume of the image transmission, but also ensuring the definition of the image transmission.
In one implementation, the method for adjusting the sharpness of the stitched image includes the following steps:
Step S301, if the movement behavior information is that the vehicle is backing, reducing the compression rate of the back view angle image in the spliced image;
Step S302, if the movement behavior information is that the vehicle advances, reducing the compression rate of the front view image in the spliced image;
Step S303, if the movement behavior information is that the vehicle turns left, reducing the compression rate of the left front view angle image and the left rear view angle image in the spliced image;
Step S304, if the movement behavior information is that the vehicle turns right, reducing the compression rate of the right front view image and the right rear view image in the spliced image.
The movement behavior information of this embodiment includes vehicle forward movement, vehicle backward movement, vehicle left turn, or vehicle right turn. In order to ensure the definition of each view angle image in the stitched image, the definition of the corresponding view angle image may be adjusted according to the behavior of the vehicle while it is running. Specifically, if the movement behavior information is that the vehicle is reversing, the compression rate of the back view image in the stitched image is reduced, which improves the definition of the back view image. If the movement behavior information is that the vehicle is moving forward, the compression rate of the front view image is reduced, which improves its definition. If the movement behavior information is that the vehicle is turning left, the compression rate of the left front view image and the left rear view image is reduced, improving their definition. If the movement behavior information is that the vehicle is turning right, the compression rate of the right front view image and the right rear view image is reduced, improving their definition. After the definition adjustment of each view angle image is completed, the embodiment can perform image coding on the adjusted stitched image to obtain video data and then transmit the video data.
In a specific application, when the sharpness of the stitched image is adjusted, the embodiment may first stitch the view angle images whose compression rate does not need to be adjusted, then separately reduce the compression rate of the view angle image to be focused on (which is determined by the motion behavior of the vehicle; for example, when the vehicle is reversing, the view angle image to be focused on is the back view image), and finally stitch the focused view angle image with the other view angle images, thereby completing the sharpness adjustment of the stitched image. In another implementation, the embodiment may obtain the original compression rate of each view angle image in the stitched image, select the lowest of these rates, and adjust the compression rate of every view angle image to that lowest value, so that the definition of each view angle image is adjusted once initially; the compression rate of the view angle image to be focused on is then reduced further so that it meets the requirement, and finally all view angle images are stitched together to complete the definition adjustment of the stitched image. Of course, in other implementations, after the base frame rate of each view angle image is adjusted to the target frame rate, the compression rate of the view angle image to be focused on may be reduced directly, and all view angle images may then be stitched to obtain the stitched image; at this point the stitched image has undergone both frame rate adjustment and definition adjustment. When image coding is performed on a stitched image that has undergone both frame rate adjustment and sharpness adjustment, the resulting video data is smaller, which facilitates transmission of the image data without affecting the sharpness of the image.
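As a hedged illustration of the behavior-dependent sharpness adjustment, the sketch below encodes each view with its own quality factor before stitching, raising the quality (i.e. lowering the compression rate) only for the views selected by the vehicle's motion behavior. The view names, the behavior-to-view mapping, and the use of per-view JPEG quality as a stand-in for the encoder's compression rate are assumptions, not details from the patent.

```python
import cv2

# Views whose compression rate is reduced for each motion behavior (assumed mapping).
FOCUS_VIEWS = {
    "forward":  ["front"],
    "backward": ["rear"],
    "left":     ["front_left", "rear_left"],
    "right":    ["front_right", "rear_right"],
}

def compress_views(views, motion, base_quality=50, focus_quality=90):
    """views: dict name -> BGR image. Returns dict name -> JPEG bytes.

    A lower compression rate is approximated here by a higher JPEG quality
    factor for the views selected by the vehicle's motion behavior.
    """
    focus = set(FOCUS_VIEWS.get(motion, []))
    encoded = {}
    for name, img in views.items():
        quality = focus_quality if name in focus else base_quality
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if not ok:
            raise RuntimeError(f"failed to encode view {name}")
        encoded[name] = buf.tobytes()
    return encoded
```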
In summary, the present embodiment first obtains images of each view angle of a vehicle and speed data of the vehicle, and determines a base frame rate corresponding to each view angle image based on the speed data. Because the change degrees of the visual angle images collected by the vehicle camera are different at different speeds, in order to reduce the data volume of image transmission and avoid transmission delay, the embodiment respectively obtains the content change degrees of the visual angle images, respectively adjusts the basic frame rate corresponding to each visual angle image based on the content change degrees to obtain the target frame rate corresponding to each visual angle image, and determines the spliced image to be transmitted based on the target frame rate. At this time, since the frame rate of the view angle images has been adjusted and the stitched image to be transmitted is again the stitching of the respective view angle images, the data amount of the stitched image at this time is relatively small. In order to ensure the definition of the spliced image, the embodiment acquires the movement behavior information of the vehicle, adjusts the definition of the image to be transmitted based on the movement behavior information, and performs image coding and transmission on the adjusted spliced image. Therefore, the invention can splice the images of each view angle of the vehicle, and adjust the frame rate of the images of each view angle according to the content change degree of the images of each view angle, so as to reduce the data quantity required to be transmitted and reduce the transmission delay of the image data. In addition, the invention can also adjust the definition of the images at each view angle according to the movement behavior information of the vehicle so as to ensure the definition of the transmitted image data.
Exemplary apparatus
Based on the above embodiment, the present invention also provides an image transmission apparatus for automated driving remote take over, as shown in fig. 6, the apparatus comprising: a base frame rate determination module 10, a base frame rate adjustment module 20, and a sharpness adjustment module 30. Specifically, the base frame rate determining module 10 in this embodiment is configured to obtain images of each view angle of a vehicle and speed data of the vehicle, and determine a base frame rate corresponding to each view angle image based on the speed data. The basic frame rate adjustment module 20 is configured to obtain content variation degrees of the respective view images, adjust, based on the content variation degrees, basic frame rates corresponding to the respective view images, obtain target frame rates corresponding to the respective view images, and determine a spliced image to be transmitted based on the target frame rates. The sharpness adjustment module 30 is configured to obtain motion behavior information of the vehicle, adjust sharpness of the stitched image based on the motion behavior information, and perform image encoding and transmission on the adjusted stitched image.
In one implementation, the base frame rate determination module 10 includes:
The speed comparison unit is used for comparing the speed data with a preset speed threshold value;
a first linear relationship obtaining unit, configured to obtain a preset first linear relationship if the speed data is smaller than the speed threshold, where the first linear relationship is used to reflect a correspondence between the speed data and the base frame rate;
a first basic frame rate determining unit configured to determine the basic frame rate according to the first linear relationship;
and the second basic frame rate determining unit is used for acquiring a preset first frame rate value if the speed data is larger than the speed threshold value, and taking the first frame rate value as the basic frame rate.
In one implementation, the base frame rate adjustment module 20 includes:
a current contour obtaining unit, configured to obtain current contour data corresponding to a current frame of the respective view angle images;
a history contour obtaining unit, configured to obtain history contour data corresponding to a previous frame of the each view angle image;
and the change degree determining unit is used for determining the content change degree according to the current contour data and the historical contour data, wherein the content change degree is used for reflecting the contour difference degree between the current frame and the last frame of each visual angle image.
In one implementation, the base frame rate adjustment module 20 includes:
the difference threshold comparison unit is used for acquiring a preset first contour difference threshold and a preset second contour difference threshold, and comparing the content change degree with the first contour difference threshold and the second contour difference threshold respectively, wherein the first contour difference threshold is smaller than the second contour difference threshold;
a second linear relationship obtaining unit, configured to obtain a preset second linear relationship if a content change degree of any one of the view images is greater than the first contour difference threshold and less than the second contour difference threshold, where the second linear relationship is used to reflect a correspondence between the content change degree and the target frame rate;
a first reference frame rate adjustment unit, configured to adjust a reference frame rate corresponding to the view angle image according to the second linear relationship, so as to obtain the target frame rate of the view angle image;
a second reference frame rate adjustment unit, configured to set a target frame rate corresponding to the view angle image as a second frame rate value if a content change degree of any one of the view angle images is smaller than the first contour difference threshold;
And the third reference frame rate adjusting unit is used for setting the target frame rate corresponding to the view images as a third frame rate value if the content change degree of any view image in the view images is larger than the second contour difference threshold value.
In one implementation, the base frame rate adjustment module 20 includes:
a stitching mode determining unit, configured to determine a stitching mode of the images of each view angle;
and the view angle image stitching unit is used for superimposing, based on the stitching mode, three of the view angle images onto the remaining view angle images, respectively, to obtain the stitched image.

In one implementation, the sharpness adjustment module 30 includes:
the first definition adjusting unit is used for reducing the compression rate of the back view angle image in the spliced image if the movement behavior information is that the vehicle is backward;
the second definition adjusting unit is used for reducing the compression rate of the front view image in the spliced image if the movement behavior information is that the vehicle advances;
the third definition adjusting unit is used for reducing the compression ratio of the left front view angle image and the left rear view angle image in the spliced image if the movement behavior information is that the vehicle turns left;
And the fourth definition adjusting unit is used for reducing the compression ratio of the right front view angle image and the right rear view angle image in the spliced image if the movement behavior information is that the vehicle turns right.
The working principle of the functional module in the image transmission device for automatic driving remote take-over in the embodiment is the same as that of the method steps in the above method embodiment, and will not be repeated here.
Based on the above embodiment, the present invention also provides a vehicle-mounted terminal, a schematic block diagram of which is shown in fig. 7. The vehicle-mounted terminal may include one or more processors 100 (only one is shown in fig. 7), a memory 101, and a computer program 102 stored in the memory 101 and executable on the one or more processors 100, for example an image transmission program for automatic driving remote take-over. When the one or more processors 100 execute the computer program 102, the steps in the embodiments of the image transmission method for automatic driving remote take-over may be implemented. Alternatively, when executing the computer program 102, the one or more processors 100 may implement the functions of the modules/units in the embodiments of the image transmission device for automatic driving remote take-over, without limitation.
In one embodiment, the processor 100 may be a central processing unit (Central Processing Unit, CPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In one embodiment, the memory 101 may be an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device. The memory 101 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the electronic device. Further, the memory 101 may also include both an internal storage unit and an external storage device of the electronic device. The memory 101 is used to store computer programs and other programs and data required for the in-vehicle terminal. The memory 101 may also be used to temporarily store data that has been output or is to be output.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 7 is merely a block diagram of a portion of the structure related to the present invention and does not constitute a limitation of the vehicle-mounted terminal to which the present invention is applied, and that a specific vehicle-mounted terminal may include more or less components than those shown in the drawings, or may combine some components, or may have different component arrangements.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program, which may be stored on a non-transitory computer readable storage medium and which, when executed, may comprise the steps of the above-described method embodiments. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (11)

1. An image transmission method for automatic driving remote take-over, the method comprising:
acquiring images of all view angles of a vehicle and speed data of the vehicle, and determining a basic frame rate corresponding to the images of all view angles based on the speed data;
respectively acquiring content variation degrees of the view images, respectively adjusting basic frame rates corresponding to the view images based on the content variation degrees to obtain target frame rates corresponding to the view images, and determining spliced images to be transmitted based on the target frame rates;
and acquiring the movement behavior information of the vehicle, adjusting the definition of the spliced image based on the movement behavior information, and carrying out image coding and transmission on the adjusted spliced image.
2. The image transmission method for automatic driving remote take-over according to claim 1, wherein said determining, based on the speed data, a base frame rate corresponding to each view angle image comprises:
comparing the speed data with a preset speed threshold;
if the speed data is smaller than the speed threshold, acquiring a preset first linear relation, wherein the first linear relation is used for reflecting the corresponding relation between the speed data and the basic frame rate;
and determining the basic frame rate according to the first linear relation.
3. The image transmission method for automatic driving remote take-over according to claim 2, wherein said determining, based on the speed data, a base frame rate corresponding to each view angle image comprises:
and if the speed data is larger than the speed threshold, acquiring a preset first frame rate value, and taking the first frame rate value as the basic frame rate.
4. The image transmission method for automatic driving remote take-over according to claim 1, wherein the respectively acquiring a content change degree of each view image comprises:
acquiring current contour data corresponding to a current frame of each view image;
acquiring historical contour data corresponding to a previous frame of each view image;
and determining the content change degree according to the current contour data and the historical contour data, wherein the content change degree reflects the degree of contour difference between the current frame and the previous frame of each view image.
5. The image transmission method for automatic driving remote take-over according to claim 4, wherein the adjusting the base frame rate corresponding to each view image based on the content change degree to obtain a target frame rate corresponding to each view image comprises:
acquiring a preset first contour difference threshold and a preset second contour difference threshold, and comparing the content change degree with the first contour difference threshold and the second contour difference threshold respectively, wherein the first contour difference threshold is smaller than the second contour difference threshold;
if the content change degree of any one of the view images is larger than the first contour difference threshold and smaller than the second contour difference threshold, acquiring a preset second linear relation, wherein the second linear relation reflects the correspondence between the content change degree and the target frame rate;
and adjusting the base frame rate corresponding to that view image according to the second linear relation to obtain the target frame rate of that view image.
6. The image transmission method for automatic driving remote take-over according to claim 5, wherein the adjusting the base frame rate corresponding to each view image based on the content change degree to obtain a target frame rate corresponding to each view image further comprises:
if the content change degree of any one of the view images is smaller than the first contour difference threshold, setting the target frame rate corresponding to that view image to a second frame rate value;
and if the content change degree of any one of the view images is larger than the second contour difference threshold, setting the target frame rate corresponding to that view image to a third frame rate value.
7. The image transmission method for automatic driving remote take-over according to claim 1, wherein the determining a stitched image to be transmitted based on the target frame rates comprises:
determining a stitching mode of the view images;
and overlaying, based on the stitching mode, three of the frame-rate-adjusted view images onto the remaining view images to obtain the stitched image.
8. The image transmission method for automatic driving remote take-over according to claim 1, wherein the adjusting the sharpness of the stitched image based on the movement behavior information comprises:
if the movement behavior information indicates that the vehicle is reversing, reducing the compression rate of the rear view image in the stitched image;
if the movement behavior information indicates that the vehicle is moving forward, reducing the compression rate of the front view image in the stitched image;
if the movement behavior information indicates that the vehicle is turning left, reducing the compression rate of the front-left view image and the rear-left view image in the stitched image;
and if the movement behavior information indicates that the vehicle is turning right, reducing the compression rate of the front-right view image and the rear-right view image in the stitched image.
9. An image transmission apparatus for automatic driving remote take-over, the apparatus comprising:
a base frame rate determination module, configured to acquire view images of a vehicle from respective view angles and speed data of the vehicle, and to determine a base frame rate corresponding to each view image based on the speed data;
a base frame rate adjustment module, configured to respectively acquire a content change degree of each view image, adjust the base frame rate corresponding to each view image based on the content change degree to obtain a target frame rate corresponding to each view image, and determine a stitched image to be transmitted based on the target frame rates;
and a sharpness adjustment module, configured to acquire movement behavior information of the vehicle, adjust the sharpness of the stitched image based on the movement behavior information, and encode and transmit the adjusted stitched image.
10. A vehicle-mounted terminal, characterized in that it comprises a memory, a processor, and an image transmission program for automatic driving remote take-over stored in the memory and executable on the processor, wherein the processor, when executing the image transmission program for automatic driving remote take-over, implements the steps of the image transmission method for automatic driving remote take-over according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which an image transmission program for automatic driving remote take-over is stored, wherein the image transmission program for automatic driving remote take-over, when executed by a processor, implements the steps of the image transmission method for automatic driving remote take-over according to any one of claims 1 to 8.
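
The claims above define behavior rather than an implementation. The short sketches that follow are editorial illustrations only and are not part of the patent: every concrete number, name, and library choice in them is an assumption. First, a minimal sketch of the speed-to-base-frame-rate rule of claims 2 and 3, with an assumed speed threshold and assumed frame-rate values:

# Hypothetical speed-to-base-frame-rate mapping (claims 2-3).
# The threshold and the frame-rate values below are assumptions, not taken from the patent.

SPEED_THRESHOLD_KMH = 60.0      # preset speed threshold (assumed)
MIN_FPS = 5.0                   # base frame rate at standstill (assumed)
FIRST_FRAME_RATE_VALUE = 30.0   # preset "first frame rate value" used above the threshold (assumed)

def base_frame_rate(speed_kmh: float) -> float:
    """Return the base frame rate for the current vehicle speed."""
    if speed_kmh >= SPEED_THRESHOLD_KMH:
        # Claim 3: above the threshold, use the preset first frame rate value.
        return FIRST_FRAME_RATE_VALUE
    # Claim 2: below the threshold, a preset linear relation between speed and base frame rate.
    slope = (FIRST_FRAME_RATE_VALUE - MIN_FPS) / SPEED_THRESHOLD_KMH
    return MIN_FPS + slope * max(speed_kmh, 0.0)

if __name__ == "__main__":
    for v in (0, 20, 45, 80):
        print(f"{v} km/h -> {base_frame_rate(v):.1f} fps")

Any monotone linear relation below the threshold satisfies the wording of claim 2; the slope here is chosen only so that the rate reaches the first frame rate value exactly at the threshold.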
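
Claim 4 compares contour data of the current and previous frame of each view but does not fix a contour extractor. The sketch below uses OpenCV's Canny edge detector as one possible stand-in, so both the metric and its thresholds are assumptions:

# One possible "content change degree" (claim 4): compare edge maps of consecutive frames.
# Canny edges stand in for the claim's "contour data"; the extractor and thresholds are assumed.

import cv2
import numpy as np

def contour_data(image_bgr: np.ndarray) -> np.ndarray:
    """Binary edge map used as a simple proxy for a frame's contour data."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 50, 150)

def content_change_degree(current_bgr: np.ndarray, previous_bgr: np.ndarray) -> float:
    """Fraction of pixels whose edge state differs between the two frames (0.0 to 1.0)."""
    diff = cv2.absdiff(contour_data(current_bgr), contour_data(previous_bgr))
    return float(np.count_nonzero(diff)) / diff.size

if __name__ == "__main__":
    prev = np.zeros((240, 320, 3), dtype=np.uint8)
    cur = prev.copy()
    cv2.rectangle(cur, (60, 60), (200, 180), (255, 255, 255), 2)  # simulate a newly appeared outline
    print(f"content change degree: {content_change_degree(cur, prev):.4f}")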
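
Claims 5 and 6 bracket the content change degree with two contour difference thresholds and apply a second linear relation in between. The following is one assumed form of that relation, with illustrative threshold and frame-rate values:

# Hypothetical mapping from content change degree to target frame rate (claims 5-6).
# The two thresholds and the second/third frame rate values are assumptions.

FIRST_DIFF_THRESHOLD = 0.02     # below this the view is treated as nearly static (assumed)
SECOND_DIFF_THRESHOLD = 0.20    # above this the view is treated as highly dynamic (assumed)
SECOND_FRAME_RATE_VALUE = 1.0   # target fps for a nearly static view (assumed)
THIRD_FRAME_RATE_VALUE = 30.0   # target fps for a rapidly changing view (assumed)

def target_frame_rate(base_fps: float, change_degree: float) -> float:
    """Adjust one view's base frame rate according to its content change degree."""
    if change_degree < FIRST_DIFF_THRESHOLD:
        # Claim 6: almost no change -> preset second frame rate value.
        return SECOND_FRAME_RATE_VALUE
    if change_degree > SECOND_DIFF_THRESHOLD:
        # Claim 6: large change -> preset third frame rate value.
        return THIRD_FRAME_RATE_VALUE
    # Claim 5: in between, scale the base frame rate linearly with the change degree.
    weight = (change_degree - FIRST_DIFF_THRESHOLD) / (SECOND_DIFF_THRESHOLD - FIRST_DIFF_THRESHOLD)
    return base_fps + weight * (THIRD_FRAME_RATE_VALUE - base_fps)

if __name__ == "__main__":
    for c in (0.01, 0.05, 0.15, 0.30):
        print(f"change degree {c:.2f} -> {target_frame_rate(15.0, c):.1f} fps")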
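
Claims 1 and 7 state that the stitched image to be transmitted is determined based on the per-view target frame rates, without fixing a scheduling policy. One plausible reading, sketched with assumed names, is to refresh a view only when enough time has elapsed for its target frame rate and to reuse its previously transmitted frame otherwise:

# Hypothetical per-view frame gating. The scheduling policy and all names are assumptions;
# the claims only require that the transmitted stitched image follow the per-view target frame rates.

import time
from typing import Dict, Optional

class ViewScheduler:
    def __init__(self) -> None:
        self._last_sent: Dict[str, float] = {}

    def due_views(self, target_fps: Dict[str, float],
                  now: Optional[float] = None) -> Dict[str, bool]:
        """For every view, report whether a fresh frame should enter this cycle's stitched image."""
        now = time.monotonic() if now is None else now
        due: Dict[str, bool] = {}
        for view, fps in target_fps.items():
            interval = 1.0 / max(fps, 1e-3)
            last = self._last_sent.get(view)
            if last is None or now - last >= interval:
                due[view] = True
                self._last_sent[view] = now
            else:
                due[view] = False   # keep the previously transmitted frame for this view
        return due

if __name__ == "__main__":
    scheduler = ViewScheduler()
    rates = {"front": 30.0, "rear": 5.0, "front_left": 10.0, "front_right": 10.0}
    print(scheduler.due_views(rates, now=0.0))    # every view is refreshed on the first cycle
    print(scheduler.due_views(rates, now=0.05))   # 50 ms later only the 30 fps front view is due again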
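
Claim 7 overlays three frame-rate-adjusted views onto the remaining view to form the stitched image but leaves the layout open. The picture-in-picture arrangement below, with three thumbnails along the top of a main view, is only one assumed stitching mode:

# One assumed stitching mode (claim 7): shrink three secondary views and overlay them as
# thumbnails along the top edge of the main view. The layout and scale factor are illustrative.

from typing import List

import cv2
import numpy as np

def stitch_views(main_view: np.ndarray, secondary_views: List[np.ndarray],
                 thumb_scale: float = 0.25) -> np.ndarray:
    """Overlay up to three secondary views onto a copy of the main view."""
    stitched = main_view.copy()
    h, w = main_view.shape[:2]
    thumb_w, thumb_h = int(w * thumb_scale), int(h * thumb_scale)
    for i, view in enumerate(secondary_views[:3]):
        thumb = cv2.resize(view, (thumb_w, thumb_h))
        x0 = i * thumb_w
        stitched[0:thumb_h, x0:x0 + thumb_w] = thumb   # paste the thumbnail into the top strip
    return stitched

if __name__ == "__main__":
    front = np.full((480, 640, 3), 40, dtype=np.uint8)
    others = [np.full((480, 640, 3), c, dtype=np.uint8) for c in (90, 150, 210)]
    print(stitch_views(front, others).shape)   # (480, 640, 3): one composite frame, not four streams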
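
Claim 8 lowers the compression rate of the views relevant to the current maneuver so that they are transmitted more sharply. The sketch below expresses this as a per-view JPEG quality choice applied before stitching; the behavior-to-view mapping keys and the quality values are assumptions:

# Hypothetical maneuver-driven compression: views relevant to the current movement behavior are
# encoded at a higher JPEG quality, i.e. a lower compression rate. The mapping and quality values
# are assumptions; claim 8 only states which views receive the lower compression rate.

from typing import Dict

import cv2
import numpy as np

PRIORITY_VIEWS = {
    "forward":    {"front"},
    "reverse":    {"rear"},
    "turn_left":  {"front_left", "rear_left"},
    "turn_right": {"front_right", "rear_right"},
}
HIGH_QUALITY = 90   # JPEG quality for prioritized views (assumed)
LOW_QUALITY = 50    # JPEG quality for the remaining views (assumed)

def encode_views(views: Dict[str, np.ndarray], movement: str) -> Dict[str, bytes]:
    """Encode each view to JPEG, giving the maneuver-relevant views the lower compression rate."""
    prioritized = PRIORITY_VIEWS.get(movement, set())
    encoded: Dict[str, bytes] = {}
    for name, image in views.items():
        quality = HIGH_QUALITY if name in prioritized else LOW_QUALITY
        ok, buf = cv2.imencode(".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        if ok:
            encoded[name] = buf.tobytes()
    return encoded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views = {name: rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
             for name in ("front", "rear", "front_left", "rear_left")}
    sizes = {name: len(data) for name, data in encode_views(views, "turn_left").items()}
    print(sizes)   # the left-side views come out noticeably larger, i.e. less compressed

Encoding prioritized regions of an already stitched frame at a different quality would be an equally valid reading of claim 8; per-view encoding is used here only to keep the example short.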
CN202211444829.1A 2022-11-18 2022-11-18 Image transmission method for automatic driving remote take-over and vehicle-mounted terminal Pending CN116112475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211444829.1A CN116112475A (en) 2022-11-18 2022-11-18 Image transmission method for automatic driving remote take-over and vehicle-mounted terminal

Publications (1)

Publication Number Publication Date
CN116112475A (en) 2023-05-12

Family

ID=86264629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211444829.1A Pending CN116112475A (en) 2022-11-18 2022-11-18 Image transmission method for automatic driving remote take-over and vehicle-mounted terminal

Country Status (1)

Country Link
CN (1) CN116112475A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109819161A * 2019-01-21 2019-05-28 北京中竞鸽体育文化发展有限公司 Frame rate adjustment method and apparatus, terminal, and readable storage medium
CN113056904A (en) * 2020-05-28 2021-06-29 深圳市大疆创新科技有限公司 Image transmission method, movable platform and computer readable storage medium
WO2022133782A1 (en) * 2020-12-23 2022-06-30 深圳市大疆创新科技有限公司 Video transmission method and system, video processing method and device, playing terminal, and movable platform
KR20220050103A (en) * 2021-04-16 2022-04-22 아폴로 인텔리전트 커넥티비티 (베이징) 테크놀로지 씨오., 엘티디. Method for outputting early warning information, device, storage medium and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
高大龙; 黄雅平; 李清勇; 王胜春; 罗四维: "Panoramic image stitching algorithm based on forward-motion video of a train" (基于列车前向运动视频的全景图拼接算法), 山东大学学报(工学版) (Journal of Shandong University, Engineering Science), no. 06, 19 November 2013 (2013-11-19) *

Similar Documents

Publication Publication Date Title
CN108377345B (en) Exposure parameter value determination method and device, multi-view camera and storage medium
CN110855883B (en) Image processing system, method, device equipment and storage medium
TWI578271B (en) Dynamic image processing method and dynamic image processing system
US9041807B2 (en) Image processing device and image processing method
CN109765902B (en) Unmanned vehicle driving reference line processing method and device and vehicle
CN111937380B (en) Image processing apparatus
CN107220930B (en) Fisheye image processing method, computer device and computer readable storage medium
CN113436572B (en) Correction method and device irrelevant to direction of LED box body and LED display screen
US20200134782A1 (en) Image stitching processing method and system thereof
CN111986088B (en) Image processing method, device, storage medium and terminal equipment
CN113691776A (en) In-vehicle camera system and light supplementing method
CN116112475A (en) Image transmission method for automatic driving remote take-over and vehicle-mounted terminal
CN112685125B (en) Theme switching method and device of application program and computer readable storage medium
JP7075273B2 (en) Parking support device
CN112584030A (en) Driving video recording method and electronic equipment
CN112486684B (en) Driving image display method, device and platform, storage medium and embedded equipment
CN111314615B (en) Method and device for controlling binocular double-zoom camera and camera
CN115278104B (en) Image brightness adjustment method and device, electronic equipment and storage medium
US7953292B2 (en) Semiconductor integrated circuit device and rendering processing display system
CN115278068A (en) Weak light enhancement method and device for vehicle-mounted 360-degree panoramic image system
CN113132637B (en) Image processing method, image processing chip, application processing chip and electronic equipment
US11523053B2 (en) Image processing apparatus
CN113542847B (en) Image display method, device, equipment and storage medium
CN113954835B (en) Method and system for controlling vehicle to travel at intersection and computer readable storage medium
CN114604175B (en) Method, processor, device and system for determining engineering vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination