CN116437120B - Video framing processing method and device

Video framing processing method and device

Info

Publication number
CN116437120B
Authority
CN
China
Prior art keywords
video
video frame
processing
processing result
image quality
Prior art date
Legal status
Active
Application number
CN202310425956.5A
Other languages
Chinese (zh)
Other versions
CN116437120A (en)
Inventor
徐波
丁赞
Current Assignee
Shenzhen Senyun Intelligent Technology Co., Ltd.
Original Assignee
Shenzhen Senyun Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Senyun Intelligent Technology Co., Ltd.
Priority to CN202310425956.5A
Publication of CN116437120A
Application granted
Publication of CN116437120B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/23418: ... involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N 21/2343: ... involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/234363: ... by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N 21/234381: ... by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44008: ... involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/4402: ... involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440263: ... by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N 21/440281: ... by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a video framing processing method and device. The video framing processing method comprises the following steps: acquiring video frames to be processed; determining a first target video frame among the first video frames and a second target video frame among the second video frames, wherein the image quality of the first target video frame is greater than a first preset image quality, the image quality of the second target video frame is greater than a second preset image quality, and the first preset image quality is greater than the second preset image quality; processing the first target video frame through a preset first video frame processing model to obtain a first processing result, and processing the second target video frame through the same model to obtain a second processing result; and processing the remaining video frames among the first video frames and among the second video frames based on the first and second processing results. The method realizes framed processing of video frames, improving both the processing efficiency of the video frames and the accuracy of the processing results.

Description

Video framing processing method and device
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video framing processing method and device.
Background
With the development of artificial intelligence technology, unmanned devices such as unmanned vehicles have also advanced. While an unmanned device travels, it acquires video frames by means of a video capture device; based on the acquired video frames, obstacles can be predicted so that a corresponding obstacle avoidance strategy can be executed.
However, an unmanned device has a large amount of data to process. If every video frame is processed in a uniform manner, video frame processing efficiency is low; moreover, the accuracy of the final video frame processing result also suffers.
Disclosure of Invention
The invention aims to provide a video framing processing method and device that realize framed processing of video frames, improving both the processing efficiency of the video frames and the accuracy of the processing results.
To achieve the above object, an embodiment of the present application provides a video framing processing method, including: acquiring video frames to be processed, the video frames to be processed comprising a first video frame acquired by a first video acquisition device and a second video frame acquired by a second video acquisition device; determining a first target video frame among the first video frames and a second target video frame among the second video frames, wherein the image quality of the first target video frame is greater than a first preset image quality, the image quality of the second target video frame is greater than a second preset image quality, and the first preset image quality is greater than the second preset image quality; processing the first target video frame through a preset first video frame processing model to obtain a first processing result, and processing the second target video frame through the first video frame processing model to obtain a second processing result, the first processing result indicating whether the first target video frame includes a target object and the second processing result indicating whether the second target video frame includes the target object; and processing the remaining video frames among the first video frames and the remaining video frames among the second video frames based on the first processing result and the second processing result.
In one possible implementation manner, the video framing processing method is applied to an unmanned device, and the first video acquisition device is arranged in a first direction of the unmanned device and is used for acquiring video images in the first direction; the second video acquisition device is arranged in a second direction of the unmanned device and is used for acquiring video images in the second direction; the angle difference between the first direction and the moving direction of the unmanned equipment is a first angle difference, the angle difference between the second direction and the moving direction is a second angle difference, and the difference between the first angle difference and the second angle difference is a preset difference.
In a possible implementation manner, the video image acquisition period of the first video acquisition device is a first period, and the video image acquisition period of the second video acquisition device is a second period; the difference between the first period and the second period is determined based on the preset difference, and the sum of the first period and the second period is determined based on the device travel distance and/or the device travel speed of the unmanned device.
In one possible implementation manner, the video framing processing method further includes: determining an initial value of a sum of the first period and the second period; in the traveling process of the unmanned equipment, if the equipment traveling distance is detected to be increased by a preset traveling distance, the initial value is reduced by a first preset value, and if the equipment traveling speed is detected to be increased by a preset traveling speed, the initial value is reduced by a second preset value; if the equipment travelling speed is detected to be reduced by the preset travelling speed, the initial value is increased by a third preset value; if the equipment travelling distance is detected to be increased by a preset travelling distance, and the equipment travelling speed is reduced by the preset travelling speed, the initial value is kept unchanged.
In one possible implementation manner, the video framing processing method further includes: acquiring first equipment information of the first video acquisition equipment and acquiring second equipment information of the second video acquisition equipment; determining a first estimated image quality according to the first equipment information, and determining a second estimated image quality according to the second equipment information; if the difference between the first estimated image quality and the second estimated image quality is larger than a preset difference, determining the first estimated image quality as the first preset image quality, and determining the second estimated image quality as the second preset image quality.
In one possible implementation manner, the video framing processing method further includes: if the difference between the first estimated image quality and the second estimated image quality is smaller than or equal to the preset difference, determining the first preset image quality according to a preset image quality influence value and the first estimated image quality, and determining the second preset image quality according to the preset image quality influence value and the second estimated image quality; the preset image quality influence value is used for representing the influence of external influence factors on the image quality of the first video acquisition device and the second video acquisition device.
In a possible implementation manner, the processing of the remaining video frames in the first video frames and the remaining video frames in the second video frames based on the first processing result and the second processing result includes: if the first processing result indicates that the first target video frame includes the target object, and the second processing result indicates that the second target video frame includes the target object, processing the remaining video frames in the first video frames through a preset second video frame processing model to obtain a third processing result; and determining a final processing result based on the first processing result, the second processing result, and the third processing result; the final processing result is used for indicating whether a first target object obstacle avoidance strategy needs to be executed.
In one possible implementation manner, the video framing processing method further includes: if the first processing result indicates that the first target video frame does not include the target object, and the second processing result indicates that the second target video frame includes the target object, processing the remaining video frames in the second video frames through the second video frame processing model to obtain a fourth processing result; and determining a final processing result based on the first processing result, the second processing result, and the fourth processing result; the final processing result is used for indicating whether a second target object obstacle avoidance strategy needs to be executed.
In one possible implementation manner, the video framing processing method further includes: if the first processing result indicates that the first target video frame does not include the target object, and the second processing result indicates that the second target video frame does not include the target object, processing the remaining video frames in the first video frames through the second video frame processing model to obtain a fifth processing result, and processing the remaining video frames in the second video frames through the second video frame processing model to obtain a sixth processing result; and determining a final processing result according to the fifth processing result and the sixth processing result; the final processing result is used for indicating whether a third target object obstacle avoidance strategy needs to be executed.
An embodiment of the present application provides a video framing processing device comprising functional modules for implementing the video framing processing method of one or more of the corresponding embodiments above.
An embodiment of the present application also provides an electronic device, comprising: a processor and a memory, the processor being communicatively connected to the memory; the memory stores instructions executable by the processor to enable the processor to perform the video framing processing method according to any one of the above embodiments.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a computer, performs the video framing processing method described in any one of the foregoing embodiments.
Compared with the prior art, the video framing processing method and device, electronic device, and computer readable storage medium of the embodiments of the present application perform framed processing, based on image quality, on the video frames acquired by different video acquisition devices; obtain corresponding processing results from the framed video frames and a video frame processing model; and finally process the remaining video frames based on those processing results. This manner of processing realizes image-quality-based framed processing of video frames and improves video frame processing efficiency; moreover, because the obtained processing results are not a single result, the final processing result can be determined from several processing results, improving the accuracy of the video frame processing result. The technical scheme therefore improves the accuracy of the video frame processing result while improving video frame processing efficiency.
Drawings
FIG. 1 is a flowchart of a video framing processing method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a video framing processing device according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an unmanned device according to an embodiment of the present application.
Detailed Description
The following detailed description of specific embodiments of the present application is made with reference to the accompanying drawings, but it is to be understood that the scope of protection of the present application is not limited by the specific embodiments.
Throughout the specification and claims, unless explicitly stated otherwise, the term "comprise" or variations thereof such as "comprises" or "comprising", etc. will be understood to include the stated element or component without excluding other elements or components.
The technical scheme provided by the embodiments of the present application can be applied to various application scenarios that need to process video frames. In some of these scenarios, the video frame processing result identifies a target object in the video frame, and further processing can be performed based on that identification result.
The technical scheme can in particular be applied in the unmanned driving field, to process the video frames of unmanned devices such as unmanned vehicles and intelligent robots.
These unmanned devices need to process a large amount of data; if data processing efficiency is low, the operation stability, operation safety, and the like of the unmanned device are affected. Therefore, a technical solution that can improve video frame processing efficiency is needed to ensure the operation stability and operation safety of the unmanned device.
Based on the above, the technical scheme provided by the embodiments of the present application can be applied to an unmanned device, and in particular to the video frame processing module of the unmanned device.
Referring to fig. 1, a flowchart of a video framing processing method according to an embodiment of the present application is provided, where the video framing processing method includes:
Step 101, acquiring video frames to be processed.
In some embodiments, the video frame to be processed comprises: a first video frame acquired by the first video acquisition device and a second video frame acquired by the second video acquisition device.
In some embodiments, the first video capture device and the second video capture device may each be a motion camera, a video camera, or any other device with image and video capture functions; this is not limited herein.
In some embodiments, the resolution of the video frames acquired by the first video acquisition device may be different from the resolution of the video frames acquired by the second video acquisition device. For example: the resolution of the video frames acquired by the first video acquisition device is higher than the resolution of the video frames acquired by the second video acquisition device.
As an optional implementation manner, the video framing processing method is applied to the unmanned device, and the first video acquisition device is arranged in a first direction of the unmanned device and is used for acquiring video images in the first direction; the second video acquisition device is arranged in a second direction of the unmanned device and is used for acquiring video images in the second direction; the angle difference between the first direction and the moving direction of the unmanned equipment is a first angle difference, the angle difference between the second direction and the moving direction is a second angle difference, and the difference between the first angle difference and the second angle difference is a preset difference.
In some embodiments, assuming the direction of movement of the unmanned device is straight ahead: if the first angle difference is 45 degrees, the first direction may be front-left or front-right; if the second angle difference is 90 degrees, the second direction may be left or right.
In some embodiments, the first angle difference, the second angle difference, and the preset difference may be determined in conjunction with a difference between image quality of video captured by the first video capture device and the second video capture device.
For example, if the difference between the image quality of the video captured by the first video capture device and that captured by the second video capture device is large, the preset difference may be small. Moreover, if the image quality of the video captured by the first video capture device is higher than that captured by the second video capture device, the first angle difference is smaller than the second angle difference; conversely, the first angle difference is greater than the second angle difference.
In some embodiments, if there is no difference between the image quality of the video captured by the first and second video capture devices, the first angle difference and the second angle difference may be equal in magnitude but opposite in sign, and the preset difference is twice the absolute value of either angle difference.
In other embodiments, the first direction, the second direction, the first angle difference, the second angle difference, and the preset difference may also be configured in combination with a specific application scenario, which is not limited herein.
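To make the geometry concrete, the following is a minimal sketch, not taken from the patent, of checking a camera layout against the constraints described above; the signed-angle convention (0 degrees is the direction of movement, negative is left) and all names are illustrative assumptions.

    # Hypothetical sketch: angle differences are signed angles measured from
    # the unmanned device's direction of movement (0 = straight ahead).

    def preset_difference(first_angle_diff: float, second_angle_diff: float) -> float:
        """The difference between the two angle differences, as described above."""
        return abs(first_angle_diff - second_angle_diff)

    # Equal-quality case: equal magnitudes, opposite signs, so the preset
    # difference is twice the absolute value of either angle difference.
    first, second = -45.0, 45.0  # e.g. front-left and front-right cameras
    assert preset_difference(first, second) == 2 * abs(first)

    # Unequal-quality case: the higher-quality camera sits closer to the
    # direction of movement, so its angle difference is smaller.
    first, second = 45.0, 90.0
    assert preset_difference(first, second) == 45.0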
In some embodiments, since the unmanned device is configured with two video capture devices, video frames do not necessarily need to be captured and processed at all times while the unmanned device travels. Therefore, to reduce system power consumption, as an alternative embodiment, the first video capture device and the second video capture device may capture images periodically.
In some embodiments, the video image acquisition period of the first video acquisition device is a first period, and the video image acquisition period of the second video acquisition device is a second period; the difference between the first period and the second period is determined based on a preset difference, and the sum of the first period and the second period is determined based on the device travel distance and/or the device travel speed of the unmanned device.
In some embodiments, assuming the first period is T1 and the second period is T2, the difference between T1 and T2 is the difference between the start time of T1 and the start time of T2, or the difference between the end time of T1 and the end time of T2.
In some embodiments, the difference between the first period and the second period may be directly proportional or inversely proportional to the preset difference. For example, the larger the preset difference, the larger the difference between the first period and the second period; or, alternatively, the smaller that difference, and so on.
In some embodiments, the video framing processing method further comprises: determining an initial value of a sum of the first period and the second period; in the running process of the unmanned equipment, if the running distance of the equipment is detected to be increased by a preset running distance, reducing the initial value by a first preset value, and if the running speed of the equipment is detected to be increased by a preset running speed, reducing the initial value by a second preset value; if the equipment travelling speed is detected to be reduced by the preset travelling speed, increasing the initial value by a third preset value; if the equipment traveling distance is detected to be increased by the preset traveling distance, and the equipment traveling speed is reduced by the preset traveling speed, the initial value is kept unchanged.
In this embodiment, an initial value of the sum of the first period and the second period may be preset; the value then changes with travel conditions while the unmanned device travels, and whenever it changes, the first period and/or the second period must change correspondingly.
In some embodiments, the relationship between the preset travel distance and the first preset value may be preconfigured, for example: for a preset travel distance of 1 km, the first preset value is 10 minutes.
In some embodiments, the relationship between the preset travel speed and the second preset value may be preconfigured, for example: for a preset travel speed of 0.5 km/s, the second preset value is 5 minutes.
In some embodiments, the relationship between the preset travel speed and the third preset value may be preconfigured, for example: for a preset travel speed of 0.5 km/s, the third preset value is 10 minutes.
The above values are merely examples, and each value may be flexibly configured in different application scenarios.
In some embodiments, if the device travel distance increases while the device travel speed decreases, this corresponds to a situation in which the first period and the second period do not need to change, so the initial value may be kept unchanged.
In some embodiments, the first period and/or the second period may be increased if the initial value is increased. If only the first period or the second period is increased, the increased value of the first period or the second period is the same as the increased value of the initial value; if the first period and the second period are increased, the sum of the increased values of the first period and the second period is the same as the increased value of the initial value.
Therefore, after the updated value of the initial value is determined, the first period and the second period are adjusted accordingly based on the updated value and the manner of updating, so that the video frame acquisition strategies of the first and second video capture devices change while the unmanned device travels.
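As a concrete illustration, here is a minimal sketch of the update rules above, assuming the example values given earlier (1 km / 10 minutes, 0.5 km/s / 5 and 10 minutes); the function and variable names are illustrative, not from the patent.

    def update_period_sum(sum_minutes: float,
                          distance_increased: bool,
                          speed_increased: bool,
                          speed_decreased: bool) -> float:
        """Update the sum of the first and second acquisition periods.

        Rules, as described above:
          - travel distance up by the preset distance -> minus the first preset value
          - travel speed up by the preset speed       -> minus the second preset value
          - travel speed down by the preset speed     -> plus the third preset value
          - distance up AND speed down                -> unchanged
        """
        FIRST_PRESET = 10.0   # minutes, example value from the text
        SECOND_PRESET = 5.0   # minutes
        THIRD_PRESET = 10.0   # minutes

        if distance_increased and speed_decreased:
            return sum_minutes  # the two effects cancel; keep the value
        if distance_increased:
            sum_minutes -= FIRST_PRESET
        if speed_increased:
            sum_minutes -= SECOND_PRESET
        if speed_decreased:
            sum_minutes += THIRD_PRESET
        return sum_minutes

    # Example: the travel distance grew by the preset distance while the
    # speed stayed the same, so the period sum shrinks from 60 to 50 minutes.
    new_sum = update_period_sum(60.0, True, False, False)

Any change to the sum is then split across the first and/or second period, as described above, so that the individual increases (or decreases) add up to the change in the sum.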
In some embodiments, the video frames acquired by the first video acquisition device and by the second video acquisition device may differ from each other; moreover, each device's video frames may differ across capture times. Therefore, framed processing is required for the video frames to be processed.
Step 102, determining a first target video frame of the first video frames and determining a second target video frame of the second video frames.
The image quality of the first target video frame is larger than the first preset image quality, the image quality of the second target video frame is larger than the second preset image quality, and the first preset image quality is larger than the second preset image quality.
In some embodiments, image quality may be determined by image quality parameters such as image resolution, image sharpness, and image frame rate. Thus, for each video frame, an image quality parameter is determined separately, and then an image quality is determined based on the image quality parameter.
In some embodiments, the image quality may be taken directly from the image quality parameter, for example as a rounded and normalized value of that parameter.
In some embodiments, if there is only one image quality parameter, that parameter alone may determine the image quality. If there are multiple image quality parameters, the image quality may be determined from a combined value of those parameters.
After the image quality of each video frame is determined, a first target video frame may be determined based on the first preset image quality, and a second target video frame may be determined based on the second preset image quality.
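A minimal sketch of this selection step might look as follows; the equal-weight composite score is an illustrative assumption, since the text only says the image quality may be determined from a combined value of the parameters.

    from dataclasses import dataclass

    @dataclass
    class VideoFrame:
        resolution: float  # each parameter normalized to [0, 1]
        sharpness: float
        frame_rate: float

    def image_quality(frame: VideoFrame) -> float:
        """Combine several quality parameters into one score (equal weights assumed)."""
        return (frame.resolution + frame.sharpness + frame.frame_rate) / 3.0

    def select_target_frames(frames: list[VideoFrame],
                             preset_quality: float) -> list[VideoFrame]:
        """Keep the frames whose image quality exceeds the preset image quality."""
        return [f for f in frames if image_quality(f) > preset_quality]

    # The first preset quality is greater than the second, so the first
    # device's frames are filtered more strictly than the second device's.
    frames = [VideoFrame(0.9, 0.8, 1.0), VideoFrame(0.4, 0.5, 0.6)]
    first_targets = select_target_frames(frames, preset_quality=0.8)   # stricter
    second_targets = select_target_frames(frames, preset_quality=0.4)  # looser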
In some embodiments, the first preset image quality may be determined from a first video capture device and the second preset image quality may be determined from a second video capture device.
Thus, as an alternative embodiment, the video framing processing method further includes: acquiring first device information of the first video acquisition device and second device information of the second video acquisition device; determining a first estimated image quality according to the first device information, and a second estimated image quality according to the second device information; and, if the difference between the first estimated image quality and the second estimated image quality is greater than the preset difference, determining the first estimated image quality as the first preset image quality and the second estimated image quality as the second preset image quality.
In such an embodiment, the first device information may be basic information of the first video acquisition device, and the second device information may be basic information of the second video acquisition device.
In some embodiments, the implementation of determining an estimated image quality from device information may refer to well-established techniques in the art and is not described in detail herein.
In some embodiments, if the difference between the first estimated image quality and the second estimated image quality is greater than the preset difference, the first estimated image quality may be determined directly as the first preset image quality, and the second estimated image quality as the second preset image quality.
In other embodiments, if the difference between the first estimated image quality and the second estimated image quality is less than or equal to the preset difference, the first preset image quality is determined according to a preset image quality influence value and the first estimated image quality, and the second preset image quality according to the same influence value and the second estimated image quality; the preset image quality influence value represents the influence of external factors on the image quality of the first and second video acquisition devices.
In some embodiments, the preset image quality influence value may be determined by maintenance personnel of the unmanned device; it may also be determined from a priori data, i.e., in combination with historical data.
In some embodiments, the sum of the preset image quality influence value and the first estimated image quality may be determined as the first preset image quality; and the second preset image quality is obtained by subtracting the preset image quality influence value from the second estimated image quality.
In some embodiments, external influence factors include, for example: the influence of the travel environment of the unmanned device, and the influence of equipment wear of the unmanned device.
In some embodiments, the first preset image quality may be greater than the second preset image quality by a preset value; alternatively, it is only necessary to ensure that the first preset image quality is greater than the second preset image quality.
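Putting the two cases together, a sketch of this threshold-selection logic might read as follows; the add/subtract convention follows the description above, and the concrete numbers are purely illustrative assumptions.

    def preset_qualities(first_estimated: float,
                         second_estimated: float,
                         preset_gap: float,
                         influence_value: float) -> tuple[float, float]:
        """Derive the first and second preset image qualities.

        If the estimated qualities are far enough apart, use them directly;
        otherwise widen the gap using the external-influence value.
        """
        if first_estimated - second_estimated > preset_gap:
            return first_estimated, second_estimated
        # Difference too small: raise the first threshold by the influence
        # value and lower the second threshold by the same amount.
        return first_estimated + influence_value, second_estimated - influence_value

    first_preset, second_preset = preset_qualities(
        0.72, 0.70, preset_gap=0.1, influence_value=0.05)
    # -> (0.77, 0.65): the first preset image quality exceeds the second.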
Step 103, processing the first target video frame through a preset first video frame processing model to obtain a first processing result, and processing the second target video frame through the first video frame processing model to obtain a second processing result.
The first processing result is used for indicating whether the first target video frame comprises a target object or not, and the second processing result is used for indicating whether the second target video frame comprises the target object or not.
In some embodiments, the target object is an obstacle with respect to the unmanned device, such as: trees, buildings, other vehicles, etc.
In some embodiments, the first video frame processing model may be an object detection model, which may be pre-trained, and the trained object detection model may be used directly for detection of the target object.
In some embodiments, the first video frame processing model may be a neural network model, a random forest model, or the like.
In some embodiments, the training dataset corresponding to the first video frame processing model includes a plurality of video frames, each labelled as to whether it includes the target object. The initial first video frame processing model is trained on this training dataset so that the trained model can be used for detection of the target object.
In some embodiments, during training of the first video frame processing model, approaches for increasing model accuracy may be used to improve the accuracy of the trained model. For example, a number of training iterations may be preset, and training is considered complete once that number is reached; as another example, a test dataset may be preset and the model continuously optimized based on its accuracy on that test dataset.
In some embodiments, the first processing result may indicate that the first target video frame includes a target object or does not include a target object; likewise, the second processing result may indicate that the second target video frame includes the target object or does not include the target object. Based on different processing result conditions, different processing can be performed on the remaining video frames in the first video frame and the remaining video frames in the second video frame.
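The patent does not fix a model architecture, so the following is only a toy stand-in showing the shape of the training-and-inference flow just described: labelled samples, a stopping criterion based on a preset round count or test accuracy, and binary "includes target object" outputs. Every name and the threshold heuristic are assumptions, not the patented model.

    # Toy stand-in for the first video frame processing model; a real system
    # would use a trained detector such as a neural network.
    class FrameClassifier:
        def __init__(self) -> None:
            self.threshold = 0.5

        def fit_one_round(self, samples: list[tuple[float, bool]]) -> None:
            # Toy update: set the threshold just below the mean score of
            # samples labelled as containing a target object.
            positives = [score for score, label in samples if label]
            if positives:
                self.threshold = sum(positives) / len(positives) - 0.1

        def predict(self, score: float) -> bool:
            return score >= self.threshold  # "includes target object?"

        def accuracy(self, samples: list[tuple[float, bool]]) -> float:
            hits = sum(self.predict(s) == label for s, label in samples)
            return hits / len(samples)

    def train(model: FrameClassifier,
              train_set: list[tuple[float, bool]],
              test_set: list[tuple[float, bool]],
              max_rounds: int, target_accuracy: float) -> FrameClassifier:
        """Train until a preset round count or test accuracy is reached,
        mirroring the two stopping criteria described above."""
        for _ in range(max_rounds):
            model.fit_one_round(train_set)
            if model.accuracy(test_set) >= target_accuracy:
                break
        return model

    model = train(FrameClassifier(),
                  train_set=[(0.9, True), (0.2, False)],
                  test_set=[(0.8, True), (0.3, False)],
                  max_rounds=10, target_accuracy=1.0)
    first_result = model.predict(0.85)   # first processing result
    second_result = model.predict(0.25)  # second processing result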
Step 104, processing the remaining video frames in the first video frames and the remaining video frames in the second video frames based on the first processing result and the second processing result.
As an alternative embodiment, step 104 includes: if the first processing result indicates that the first target video frame includes the target object and the second processing result indicates that the second target video frame includes the target object, processing the remaining video frames in the first video frames through a preset second video frame processing model to obtain a third processing result; and determining a final processing result based on the first processing result, the second processing result, and the third processing result, the final processing result indicating whether a first target object obstacle avoidance strategy needs to be executed.
In some embodiments, the second video frame processing model is similar to the implementation of the first video frame processing model, except that the image quality of the video frames in the training data set of the second video frame processing model is lower than the image quality of the video frames in the training data of the first video frame processing model.
Thus, for the second video frame processing model, the second video frame processing model may be a neural network model, a random forest model, or the like.
In some embodiments, the training dataset corresponding to the second video frame processing model likewise includes a plurality of video frames, each labelled as to whether it includes the target object; the initial second video frame processing model is trained on this dataset so that the trained model can be used for detection of the target object.
In some embodiments, during training of the second video frame processing model, approaches for increasing model accuracy may likewise be used, for example a preset number of training iterations, or continuous optimization against a preset test dataset.
Thus, after the remaining video frames in the first video frame are input into the second video frame processing model, the second video frame processing model may output a third processing result, which may indicate whether the target object is included in the remaining video frames.
Further, a final processing result may be determined based on the first processing result, the second processing result, and the third processing result. And the final processing result is used for indicating whether the first target object obstacle avoidance strategy needs to be executed.
In some embodiments, if the third processing result indicates that the remaining video frames include the target object, the first target object obstacle avoidance strategy needs to be executed; if it indicates that the remaining video frames do not include the target object, the strategy does not need to be executed.
In some embodiments, a first target object obstacle avoidance strategy may be preconfigured; it indicates the obstacle avoidance start time, the obstacle avoidance execution duration, and the like. The specific obstacle avoidance actions still need to be formulated by the obstacle avoidance module of the unmanned device, but that module may formulate them with reference to the first target object obstacle avoidance strategy.
In some embodiments, the video framing processing method further comprises: if the first processing result indicates that the first target video frame does not include the target object and the second processing result indicates that the second target video frame includes the target object, processing the remaining video frames in the second video frames through the second video frame processing model to obtain a fourth processing result; and determining a final processing result based on the first processing result, the second processing result, and the fourth processing result, the final processing result indicating whether a second target object obstacle avoidance strategy needs to be executed.
In some embodiments, if at least two of the first processing result, the second processing result, and the fourth processing result indicate that the corresponding video frames include the target object, it is determined that the second target object obstacle avoidance strategy needs to be executed; otherwise, it is determined that it does not.
In some embodiments, the implementation of the second target object obstacle avoidance strategy is similar to the implementation of the first target object obstacle avoidance strategy, except that the obstacle avoidance execution time, the obstacle avoidance start time, etc. are different.
In some embodiments, the obstacle avoidance intensity of the second target object obstacle avoidance strategy is lower than that of the first target object obstacle avoidance strategy, for example: a later obstacle avoidance start time, a shorter obstacle avoidance execution duration, and the like.
In some embodiments, the video framing processing method further comprises: if the first processing result indicates that the first target video frame does not comprise the target object, and the second processing result indicates that the second target video frame does not comprise the target object, processing the remaining video frames in the first video frame through a second video frame processing model to obtain a fifth processing result, and processing the remaining video frames in the second video frame through the second video frame processing model to obtain a sixth processing result; determining a final processing result according to the fifth processing result and the sixth processing result; the final processing result is used for indicating whether the third target object obstacle avoidance strategy needs to be executed.
In some embodiments, if the fifth processing result and/or the sixth processing result indicate that the corresponding video frame includes the target object, it is determined that the third target object obstacle avoidance policy needs to be executed. Otherwise, determining that the third target object obstacle avoidance strategy does not need to be executed.
In some embodiments, the implementation of the third target object obstacle avoidance strategy is similar to the implementation of the first target object obstacle avoidance strategy, except that the obstacle avoidance execution time, the obstacle avoidance start time, and the like are different.
In some embodiments, the obstacle avoidance intensity of the third target object obstacle avoidance strategy is lower than that of the first target object obstacle avoidance strategy, for example: a later obstacle avoidance start time, a shorter obstacle avoidance execution duration, and the like.
In other embodiments, other applicable obstacle avoidance strategies may be used in combination with the processing results described above, which are not limited herein.
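The three branches above can be condensed into one decision routine. The sketch below captures only the control flow; the strategy names, the any()-style aggregation over remaining frames, and the two-vote rule for the second branch follow the description above and are otherwise assumptions (the case where only the first target video frame includes the target object is not specified in the text and is left open).

    from typing import Callable, Optional

    def decide_strategy(first_result: bool,
                        second_result: bool,
                        second_model: Callable[[object], bool],
                        remaining_first: list,
                        remaining_second: list) -> Optional[str]:
        """Route the remaining frames through the second model and choose
        which obstacle avoidance strategy, if any, to execute."""
        if first_result and second_result:
            third = [second_model(f) for f in remaining_first]
            return "first_strategy" if any(third) else None
        if not first_result and second_result:
            fourth = [second_model(f) for f in remaining_second]
            votes = int(first_result) + int(second_result) + int(any(fourth))
            return "second_strategy" if votes >= 2 else None
        if not first_result and not second_result:
            fifth = [second_model(f) for f in remaining_first]
            sixth = [second_model(f) for f in remaining_second]
            return "third_strategy" if any(fifth) or any(sixth) else None
        return None  # first includes the object, second does not: unspecified

    # Example with a trivial stand-in model that flags frames whose score
    # exceeds 0.5 (scores here are placeholders for real frames).
    toy_model = lambda score: score > 0.5
    print(decide_strategy(True, True, toy_model, [0.7, 0.2], [0.4]))  # first_strategy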
As can be seen from the description of the foregoing embodiments, the video framing processing scheme performs framed processing, based on image quality, on the video frames acquired by different video acquisition devices; obtains corresponding processing results from the framed video frames and the video frame processing models; and finally processes the remaining video frames based on those processing results. This manner of processing realizes image-quality-based framed processing of video frames and improves video frame processing efficiency; moreover, because the obtained processing results are not a single result, the final processing result can be determined from several processing results, improving the accuracy of the video frame processing result. The technical scheme therefore improves the accuracy of the video frame processing result while improving video frame processing efficiency.
Referring to fig. 2, a schematic structural diagram of a video framing processing device according to an embodiment of the present application is provided, where the video framing processing device includes:
an acquisition module 201, configured to acquire video frames to be processed, the video frames to be processed comprising a first video frame acquired by a first video acquisition device and a second video frame acquired by a second video acquisition device; and a processing module 202, configured to: determine a first target video frame among the first video frames and a second target video frame among the second video frames, wherein the image quality of the first target video frame is greater than a first preset image quality, the image quality of the second target video frame is greater than a second preset image quality, and the first preset image quality is greater than the second preset image quality; process the first target video frame through a preset first video frame processing model to obtain a first processing result, and process the second target video frame through the first video frame processing model to obtain a second processing result, the first processing result indicating whether the first target video frame includes a target object and the second processing result indicating whether the second target video frame includes the target object; and process the remaining video frames among the first video frames and the remaining video frames among the second video frames based on the first processing result and the second processing result.
In some embodiments, the video framing processing method is applied to an unmanned device, and the first video acquisition device is arranged in a first direction of the unmanned device and is used for acquiring video images in the first direction; the second video acquisition device is arranged in a second direction of the unmanned device and is used for acquiring video images in the second direction; the angle difference between the first direction and the moving direction of the unmanned equipment is a first angle difference, the angle difference between the second direction and the moving direction is a second angle difference, and the difference between the first angle difference and the second angle difference is a preset difference.
In some embodiments, the video image acquisition period of the first video acquisition device is a first period, and the video image acquisition period of the second video acquisition device is a second period; the difference between the first period and the second period is determined based on the preset difference, and the sum of the first period and the second period is determined based on the device travel distance and/or the device travel speed of the unmanned device.
In some embodiments, the processing module 202 is further configured to: determine an initial value of the sum of the first period and the second period; and, while the unmanned device travels, if the device travel distance is detected to have increased by a preset travel distance, reduce the initial value by a first preset value; if the device travel speed is detected to have increased by a preset travel speed, reduce the initial value by a second preset value; if the device travel speed is detected to have decreased by the preset travel speed, increase the initial value by a third preset value; and, if the device travel distance is detected to have increased by the preset travel distance while the device travel speed has decreased by the preset travel speed, keep the initial value unchanged.
In some embodiments, the acquisition module 201 is further configured to acquire first device information of the first video acquisition device and second device information of the second video acquisition device; the processing module 202 is further configured to: determine a first estimated image quality according to the first device information and a second estimated image quality according to the second device information; and, if the difference between the first estimated image quality and the second estimated image quality is greater than a preset difference, determine the first estimated image quality as the first preset image quality and the second estimated image quality as the second preset image quality.
In some embodiments, the processing module 202 is further configured to: if the difference between the first estimated image quality and the second estimated image quality is less than or equal to the preset difference, determine the first preset image quality according to a preset image quality influence value and the first estimated image quality, and determine the second preset image quality according to the preset image quality influence value and the second estimated image quality; the preset image quality influence value is used for representing the influence of external influence factors on the image quality of the first video acquisition device and the second video acquisition device.
In some embodiments, the processing module 202 is further configured to: if the first processing result indicates that the first target video frame includes the target object and the second processing result indicates that the second target video frame includes the target object, process the remaining video frames in the first video frames through a preset second video frame processing model to obtain a third processing result; and determine a final processing result based on the first processing result, the second processing result, and the third processing result, the final processing result indicating whether the first target object obstacle avoidance strategy needs to be executed.
In some embodiments, the processing module 202 is further configured to: if the first processing result indicates that the first target video frame does not include the target object and the second processing result indicates that the second target video frame includes the target object, process the remaining video frames in the second video frames through the second video frame processing model to obtain a fourth processing result; and determine a final processing result based on the first processing result, the second processing result, and the fourth processing result, the final processing result indicating whether the second target object obstacle avoidance strategy needs to be executed.
In some embodiments, the processing module 202 is further configured to: if the first processing result indicates that the first target video frame does not include the target object and the second processing result indicates that the second target video frame does not include the target object, process the remaining video frames in the first video frames through the second video frame processing model to obtain a fifth processing result, and process the remaining video frames in the second video frames through the second video frame processing model to obtain a sixth processing result; and determine a final processing result according to the fifth processing result and the sixth processing result, the final processing result indicating whether the third target object obstacle avoidance strategy needs to be executed.
It will be appreciated that the apparatus corresponds to the video framing processing method described above; accordingly, for the implementation of each functional module, reference may be made to the foregoing examples, and it will not be described again here.
Referring to fig. 3, an embodiment of the present application further provides an unmanned device, including: a processor 301 and a memory 302, the processor 301 and the memory 302 being communicatively coupled. The unmanned device can serve as the execution subject of the video framing processing method.
The memory 302 stores instructions executable by the processor 301; when executed by the processor 301, the instructions enable the processor 301 to perform the video framing processing method described in the foregoing embodiments.
In some embodiments, the processor 301 and the memory 302 are communicatively connected via a communication bus.
It will be appreciated that the unmanned device may also include further general-purpose modules as required, which are not described in detail in the embodiments of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present application are presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the application to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the present application and its practical application to thereby enable one skilled in the art to make and utilize the present application in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. The scope of the application is intended to be defined by the claims and the equivalents thereof.

Claims (8)

1. A video framing processing method, applied to an unmanned device, comprising:
acquiring a video frame to be processed; the video frame to be processed comprises a first video frame acquired by a first video acquisition device and a second video frame acquired by a second video acquisition device; the first video acquisition device is arranged in a first direction of the unmanned device and is used for acquiring video images in the first direction; the second video acquisition device is arranged in a second direction of the unmanned device and is used for acquiring video images in the second direction; the angle difference between the first direction and the moving direction of the unmanned device is a first angle difference, the angle difference between the second direction and the moving direction is a second angle difference, and the difference between the first angle difference and the second angle difference is a preset difference;
determining a first target video frame of the first video frames and a second target video frame of the second video frames; the image quality of the first target video frame is greater than a first preset image quality, the image quality of the second target video frame is greater than a second preset image quality, and the first preset image quality is greater than the second preset image quality;
processing the first target video frame through a preset first video frame processing model to obtain a first processing result, and processing the second target video frame through the first video frame processing model to obtain a second processing result; the first processing result is used for indicating whether the first target video frame comprises a target object, and the second processing result is used for indicating whether the second target video frame comprises the target object;
if the first processing result indicates that the first target video frame comprises the target object, and the second processing result indicates that the second target video frame comprises the target object, processing the remaining video frames in the first video frame through a preset second video frame processing model to obtain a third processing result;
determining a final processing result based on the first processing result, the second processing result, and the third processing result; the final processing result is used for indicating whether the first target object obstacle avoidance strategy needs to be executed.
2. The video framing processing method according to claim 1, wherein a video image acquisition period of the first video acquisition device is a first period, and a video image acquisition period of the second video acquisition device is a second period; the difference between the first period and the second period is determined based on the preset difference, and the sum of the first period and the second period is determined based on the device travel distance and/or the device travel speed of the unmanned device.
3. The video framing processing method according to claim 2, characterized in that the video framing processing method further comprises:
determining an initial value of a sum of the first period and the second period;
in the traveling process of the unmanned device, if the device travel distance is detected to have increased by a preset travel distance, the initial value is reduced by a first preset value, and if the device travel speed is detected to have increased by a preset travel speed, the initial value is reduced by a second preset value;
if the device travel speed is detected to have decreased by the preset travel speed, the initial value is increased by a third preset value;
if the device travel distance is detected to have increased by the preset travel distance and the device travel speed has decreased by the preset travel speed, the initial value is kept unchanged.
4. The video framing processing method according to claim 1, characterized in that the video framing processing method further comprises:
acquiring first device information of the first video acquisition device, and acquiring second device information of the second video acquisition device;
determining a first estimated image quality according to the first device information, and determining a second estimated image quality according to the second device information;
if the difference between the first estimated image quality and the second estimated image quality is greater than a preset difference, determining the first estimated image quality as the first preset image quality, and determining the second estimated image quality as the second preset image quality.
5. The video framing processing method according to claim 4, further comprising:
if the difference between the first estimated image quality and the second estimated image quality is less than or equal to the preset difference, determining the first preset image quality according to a preset image quality influence value and the first estimated image quality, and determining the second preset image quality according to the preset image quality influence value and the second estimated image quality;
the preset image quality influence value is used for characterizing the influence of external factors on the image quality of the first video acquisition device and the second video acquisition device.
6. The video framing processing method according to claim 1, wherein processing the remaining video frames in the first video frame and the remaining video frames in the second video frame based on the first processing result and the second processing result comprises:
if the first processing result indicates that the first target video frame comprises the target object, and the second processing result indicates that the second target video frame comprises the target object, processing the remaining video frames in the first video frame through a preset second video frame processing model to obtain a third processing result;
determining a final processing result based on the first processing result, the second processing result, and the third processing result; the final processing result is used for indicating whether the first target object obstacle avoidance strategy needs to be executed.
7. The video framing processing method according to claim 1, characterized in that the video framing processing method further comprises:
if the first processing result indicates that the first target video frame does not include the target object, and the second processing result indicates that the second target video frame does not include the target object, processing the remaining video frames in the first video frame through the second video frame processing model to obtain a fifth processing result, and processing the remaining video frames in the second video frame through the second video frame processing model to obtain a sixth processing result;
determining a final processing result according to the fifth processing result and the sixth processing result; the final processing result is used for indicating whether the third target object obstacle avoidance strategy needs to be executed.
8. A video framing processing device, applied to an unmanned device, the video framing processing device comprising:
an acquisition module, configured to acquire a video frame to be processed; the video frame to be processed comprises a first video frame acquired by a first video acquisition device and a second video frame acquired by a second video acquisition device; the first video acquisition device is arranged in a first direction of the unmanned device and is used for acquiring video images in the first direction; the second video acquisition device is arranged in a second direction of the unmanned device and is used for acquiring video images in the second direction; the angle difference between the first direction and the moving direction of the unmanned device is a first angle difference, the angle difference between the second direction and the moving direction is a second angle difference, and the difference between the first angle difference and the second angle difference is a preset difference;
a processing module, configured to:
determine a first target video frame of the first video frames and a second target video frame of the second video frames; the image quality of the first target video frame is greater than a first preset image quality, the image quality of the second target video frame is greater than a second preset image quality, and the first preset image quality is greater than the second preset image quality;
processing the first target video frame through a preset first video frame processing model to obtain a first processing result, and processing the second target video frame through the first video frame processing model to obtain a second processing result; the first processing result is used for indicating whether the first target video frame comprises a target object or not, and the second processing result is used for indicating whether the second target video frame comprises the target object or not;
if the first processing result indicates that the first target video frame comprises the target object, and the second processing result indicates that the second target video frame comprises the target object, process the remaining video frames in the first video frame through a preset second video frame processing model to obtain a third processing result; and
determine a final processing result based on the first processing result, the second processing result, and the third processing result; the final processing result is used for indicating whether the first target object obstacle avoidance strategy needs to be executed.
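As a worked illustration of the acquisition-period update recited in claim 3 above, the Python sketch below applies the four rules to the initial value of the period sum. The function name, the boolean event flags, and the assumption that one detection event is handled per call are all hypothetical; the claims do not specify how the preset values are chosen.

    def adjust_period_sum(
        initial: float,
        dist_increased: bool,    # travel distance grew by the preset travel distance
        speed_increased: bool,   # travel speed grew by the preset travel speed
        speed_decreased: bool,   # travel speed fell by the preset travel speed
        first_preset: float, second_preset: float, third_preset: float,
    ) -> float:
        """Update the sum of the two acquisition periods per the rules of claim 3."""
        if dist_increased and speed_decreased:
            return initial                   # opposing signals: keep the value unchanged
        if dist_increased:
            return initial - first_preset    # more distance covered: sample more densely
        if speed_increased:
            return initial - second_preset   # faster travel: sample more densely
        if speed_decreased:
            return initial + third_preset    # slower travel: sampling can relax
        return initial                       # no event detected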
CN202310425956.5A 2023-04-20 2023-04-20 Video framing processing method and device Active CN116437120B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310425956.5A CN116437120B (en) 2023-04-20 2023-04-20 Video framing processing method and device

Publications (2)

Publication Number Publication Date
CN116437120A (en) 2023-07-14
CN116437120B (en) 2024-04-09

Family

ID=87094160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310425956.5A Active CN116437120B (en) 2023-04-20 2023-04-20 Video framing processing method and device

Country Status (1)

Country Link
CN (1) CN116437120B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110958469A (en) * 2019-12-13 2020-04-03 联想(北京)有限公司 Video processing method and device, electronic equipment and storage medium
CN113486907A (en) * 2021-07-12 2021-10-08 深圳慧源创新科技有限公司 Unmanned equipment obstacle avoidance method and device and unmanned equipment
CN114359865A (en) * 2021-12-30 2022-04-15 广州赛特智能科技有限公司 Obstacle detection method and related device
WO2022078463A1 (en) * 2020-10-16 2022-04-21 爱驰汽车(上海)有限公司 Vehicle-based obstacle detection method and device
CN115675526A (en) * 2022-11-11 2023-02-03 东风商用车有限公司 Intelligent driving visual debugging method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant