CN110276322B - Image processing method and device combined with vehicle machine idle resources

Image processing method and device combined with vehicle machine idle resources

Info

Publication number
CN110276322B
CN110276322B CN201910562835.9A
Authority
CN
China
Prior art keywords
image
scene
detection
image detection
parameters
Prior art date
Legal status
Active
Application number
CN201910562835.9A
Other languages
Chinese (zh)
Other versions
CN110276322A (en)
Inventor
杨文龙
P·尼古拉斯
Current Assignee
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd
Priority to CN201910562835.9A
Publication of CN110276322A
Application granted
Publication of CN110276322B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 - Registering or indicating the working of vehicles
    • G07C5/08 - Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 - Registering performance data
    • G07C5/085 - Registering performance data using electronic data carriers
    • G07C5/0866 - Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an image processing method and device combined with vehicle machine idle resources. The method comprises the following steps: periodically checking the usage state parameters of the current hardware devices in a vehicle machine, and obtaining idle resource parameters of the vehicle machine based on the usage state parameters; judging whether the idle resource parameters meet a preset condition for the vehicle machine to run a specified scene detection algorithm; and if the idle resource parameters meet the preset condition, running the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit. Based on the scheme provided by the invention, the vehicle machine and the preset image detection unit are interconnected, an auxiliary image processing function is provided, and assistance is given to the related detection algorithm on the preset image detection unit, so that detection precision is improved and cost is saved.

Description

Image processing method and device combined with vehicle machine idle resources
Technical Field
The invention relates to the technical field of automobiles, in particular to an image processing method and device combined with vehicle machine idle resources.
Background
At present, in fields such as automatic driving and driver assistance, automatic parking and active safety functions often use a target detection algorithm to help identify surrounding environment information, and such target detection algorithms are generally implemented through machine learning or deep learning. For driver-assistance functions, vision-based detection algorithms usually run on a dedicated chip, but for cost reasons the computing power of the dedicated chip is very limited. The precision and speed of the vision algorithm therefore have to be traded off against each other according to the actual scene, typically sacrificing a small amount of precision so that the speed meets the real-time requirement. As a result, the processing precision and the real-time requirement of the vision algorithm cannot be effectively balanced.
Disclosure of Invention
The present invention provides an image processing method and apparatus incorporating in-vehicle idle resources to overcome the above problems or at least partially solve the above problems.
According to an aspect of the present invention, an image processing method combined with vehicle machine idle resources is provided, including:
periodically checking the usage state parameters of the current hardware devices in a vehicle machine, and obtaining idle resource parameters of the vehicle machine based on the usage state parameters;
judging whether the idle resource parameters meet a preset condition for the vehicle machine to run a specified scene detection algorithm;
and if the idle resource parameters meet the preset condition, running the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit.
Optionally, the running of the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit includes:
acquiring image data that has been processed by an image processor and input to the image detection unit, and running the scene detection algorithm in the vehicle machine to identify scene parameters of the image data;
selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and outputting a detection result after the image detection unit performs visual detection on the image data using the image detection model.
Optionally, the acquiring of the image data that has been processed by an image processor and input to the image detection unit, and the running of the scene detection algorithm in the vehicle machine to identify scene parameters of the image data include:
acquiring image data that has been processed by an image processor and input to the image detection unit;
running the scene detection algorithm in the vehicle machine to perform scene detection on the image data, and identifying the scene type of the image data, the size and number of target detection objects in the image data, and the real-time requirement and/or precision requirement;
wherein the scene type comprises a weather type and/or a type of geographic location in the image data.
Optionally, the selecting of a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters includes:
selecting an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameters; and/or
inputting the scene parameters into a preset classification model, and matching, by the classification model based on the scene parameters, the model and model parameters of an image detection model used to detect the image data.
Optionally, the selecting of an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameters includes:
selecting, based on the scene parameters, an image detection model matching the scene type in the scene parameters from a plurality of pre-established image detection models.
Optionally, before the inputting of the scene parameters into a preset classification model and the matching, by the classification model based on the scene parameters, of the model and model parameters of the image detection model used to detect the image data, the method further includes:
constructing a classification model;
and collecting scene parameters of different scenes, matching the image detection models corresponding to the scenes, and training the classification model by taking the scene parameters of each scene and the corresponding image detection model as the input and the output of the classification model, respectively.
Optionally, before selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameter, the method further includes:
and pre-constructing and training an image detection model corresponding to a plurality of scenes.
Optionally, the selecting, based on the scene parameter, a corresponding image detection model from a plurality of pre-established image detection models, and outputting, by the image detection unit, a detection result after performing visual detection on the image data by using the image detection model includes:
selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, calling an image detection model configuration file corresponding to the image detection model by the image detection unit, operating the image detection model to perform visual detection on the image data, and outputting a detection result.
According to another aspect of the present invention, there is also provided an image processing apparatus incorporating in-vehicle idle resources, including:
the checking module is configured to periodically check the usage state parameters of each current hardware device in the vehicle machine and obtain idle resource parameters of the vehicle machine based on the usage state parameters;
the judging module is configured to judge whether the idle resource parameters meet a preset condition for the vehicle machine to run a specified scene detection algorithm;
and the detection module is configured to, if the idle resource parameters meet the preset condition, run the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit.
Optionally, the detection module is further configured to acquire image data that has been processed by an image processor and input to the image detection unit, and run the scene detection algorithm in the vehicle machine to identify scene parameters of the image data;
and to select a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and output a detection result after the image detection unit performs visual detection on the image data using the image detection model.
Optionally, the detection module is further configured to acquire image data that has been processed by an image processor and input to the image detection unit;
and to run the scene detection algorithm in the vehicle machine to perform scene detection on the image data, and identify the scene type of the image data, the size and number of target detection objects in the image data, and the real-time requirement and/or precision requirement;
wherein the scene type comprises a weather type and/or a type of geographic location in the image data.
Optionally, the detection module is further configured to select an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameters; and/or
to input the scene parameters into a preset classification model, with the classification model matching, based on the scene parameters, the model and model parameters of the image detection model used to detect the image data.
Optionally, the detection module is further configured to select, based on the scene parameter, an image detection model matching a scene type in the scene parameter from a plurality of pre-established image detection models.
Optionally, the apparatus further comprises: a first construction module configured to construct a classification model;
and collecting scene parameters of different scenes, matching image detection models corresponding to the scenes, and training the classification model by taking the scene parameters of each scene and the corresponding image detection models as the input and the output of the classification model respectively.
Optionally, the apparatus further includes a second construction module configured to pre-construct and train an image detection model corresponding to a plurality of scenes.
Optionally, the detection module is further configured to select a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, call an image detection model configuration file corresponding to the image detection model by the image detection unit, operate the image detection model to perform visual detection on the image data, and output a detection result.
According to another aspect of the present invention, there is also provided a computer storage medium storing computer program code, which when run on a computing device, causes the computing device to execute any one of the above image processing methods in conjunction with in-vehicle idle resources.
The invention provides an image processing method and device combined with vehicle machine idle resources: when it is found that the idle resources of the hardware devices in the current vehicle machine meet the preset condition, a scene detection algorithm is run in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit. In the scheme provided by the invention, the idle resources of the vehicle machine are mainly utilized to interconnect the vehicle machine and the preset image detection unit, an auxiliary image processing function is provided, and assistance is given to the related detection algorithm on the preset image detection unit, so that detection precision is improved and cost is saved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating an image processing method incorporating vehicle idle resources according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image processing method incorporating vehicle idle resources according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of a model selection process according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of an image processing apparatus incorporating vehicle idle resources according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an image processing apparatus incorporating in-vehicle idle resources according to a preferred embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a schematic flow chart of an image processing method combined with in-vehicle idle resources according to an embodiment of the present invention, and as can be seen from fig. 1, the image processing method combined with in-vehicle idle resources according to an embodiment of the present invention may include:
step S102, checking the use state parameters of each current hardware device in the vehicle machine at regular time, and acquiring idle resource parameters of the vehicle machine based on the use state parameters. Each hardware device in the in-vehicle device may include a CPU, a memory, a network bandwidth, a hard disk, and the like. When the use state parameters of each current hardware device in the in-vehicle machine are checked, the checking can be performed at fixed time intervals, and the checking can also be performed in real time, specifically, the current network bandwidth, the used storage space of the hard disk, the use state of the memory and the like are mainly obtained, and then idle resource parameters of the in-vehicle machine, such as the processing capacity of the CPU, the memory, the network bandwidth, the hard disk space and the like which are not used under the current condition, are calculated.
Step S104, judging whether the idle resource parameters meet the preset condition for the vehicle machine to run the specified scene detection algorithm.
When judging whether the idle resource parameters meet the preset condition for the vehicle machine to run the specified scene detection algorithm, it can be judged whether the hardware resources currently idle in the vehicle machine, such as CPU processing capacity, memory size, network bandwidth, and hard disk space, meet the requirements for running the specified scene detection algorithm. The preset condition may be set according to different real-time or precision requirements, and the present invention is not limited in this respect.
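Continuing the same illustrative sketch, the preset condition can be expressed as a set of minimum thresholds; the concrete values below are hypothetical and would in practice be tuned to the footprint of the scene detection algorithm.

```python
# Hypothetical preset condition for running the specified scene detection algorithm.
PRESET_CONDITION = {
    "cpu_idle_percent": 30.0,   # require at least 30 % CPU headroom
    "mem_free_mb": 512.0,       # require at least 512 MB of free memory
    "disk_free_mb": 200.0,      # space for model configuration files
}

def meets_preset_condition(idle: dict, condition: dict = PRESET_CONDITION) -> bool:
    """Return True only if every checked idle-resource parameter reaches its threshold."""
    return all(idle.get(key, 0.0) >= minimum for key, minimum in condition.items())
```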
Step S106, if the idle resource parameters meet the preset condition, running the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit.
The embodiment of the invention provides an image processing method combined with vehicle machine idle resources: when it is found that the idle resources of the hardware devices in the current vehicle machine meet the preset condition, a scene detection algorithm is run in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit. In the solution provided by the embodiment of the present invention, idle resources of the vehicle machine (e.g., the IHU, Infotainment Head Unit) are utilized to interconnect the vehicle machine and the preset image detection unit, so as to provide an auxiliary image processing function and assist the related detection algorithm on the preset image detection unit, thereby improving detection precision and saving cost.
The preset image detection unit mentioned in this embodiment may be a dedicated chip or a processor for running a visual detection algorithm. The visual detection algorithm may be a visual detection algorithm for an automatic parking scene, such as parking space line detection, moving object detection, pedestrian detection, and static obstacle detection (wheel stops, traffic cones); it may also be a visual detection algorithm for an active safety front-view camera, such as pedestrian and vehicle detection, lane line detection, and traffic light and speed limit sign detection.
In step S106, if the idle resources of the in-vehicle device meet the preset condition of the in-vehicle device running the designated scene detection algorithm, the scene detection algorithm may be run in the in-vehicle device to assist the image detection unit in performing the visual detection. Further, step S106 may include:
s106-1, acquiring image data input to the image detection unit and processed by the image processor, and operating the scene detection algorithm in the vehicle to identify scene parameters of the image data.
In this embodiment, the image data that needs to be visually detected by the image detection unit is image data acquired by an image acquisition device (e.g., a camera) mounted on the vehicle body. An Image Signal Processor (ISP) may be arranged in the image acquisition device to post-process the signal output by the front-end image sensor; its main functions include linear correction, noise removal, dead pixel removal, interpolation, white balance, automatic exposure control, and the like. The ISP safeguards the imaging quality of the camera under extreme conditions such as backlight and illumination changes (for example, passing through a tunnel). For the vehicle machine, when its own condition meets the preset condition, it acquires the image data processed by the ISP and runs the specified scene detection algorithm on it to identify the scene parameters of the image data, so that the identification result of the scene parameters can be more accurate.
Optionally, step S106-1 may further include: acquiring image data that has been processed by the ISP and input to the image detection unit; running the scene detection algorithm in the vehicle machine to perform scene detection on the image data, and identifying the scene type of the image data, the size and number of target detection objects in the image data, and the real-time requirement and/or precision requirement; wherein the scene type includes a weather type and/or a type of geographic location in the image data.
For example, the scene parameters identified from the image data by the scene detection algorithm run on the vehicle machine in the embodiment of the present invention may include at least one of the following:
1. weather-related parameters, such as rain, cloud, dusk, snow, and light intensity;
2. geographic location type parameters, such as urban road, rural, or highway environments;
3. image quality parameters, such as whether the image is blurred, the image sharpness, whether there is smear, whether there is backlight, and whether there is a reflective area in the image;
4. the real-time and precision requirements of target detection, judged by combining factors such as the number and speed of small object targets in the field of view and the speed of the vehicle.
For example, when there are only small objects in the field of view and no close objects, the precision requirement is high and the speed requirement is low; when only a near large object is in the field of view, the detection speed requirement is high but the precision requirement is low. When the vehicle speed is high, the real-time requirement is high; when the vehicle speed is low, the real-time requirement is low. By combining the vehicle speed and the size of the objects in the field of view, a simple machine learning classifier (such as an SVM or AdaBoost) can output the corresponding precision and speed requirements, and a model of appropriate size and precision is then selected.
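As a minimal sketch of that classifier step, assuming scikit-learn is available, the mapping from vehicle speed and object statistics to a precision/speed requirement class could look like the following; the feature layout, the training samples, and the labels are purely illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative features: [vehicle speed (km/h), smallest object size (px), object count]
X_train = np.array([
    [20, 12, 6],    # slow, many small distant objects -> favour precision
    [90, 150, 1],   # fast, one large nearby object    -> favour speed
    [30, 18, 4],
    [100, 200, 2],
])
# Illustrative labels: 0 = "favour precision", 1 = "favour speed"
y_train = np.array([0, 1, 0, 1])

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

def requirement_class(speed_kmh: float, min_obj_px: float, n_objects: int) -> int:
    """Map current scene statistics to a precision/speed requirement class."""
    return int(clf.predict([[speed_kmh, min_obj_px, n_objects]])[0])
```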
S106-2, selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and outputting a detection result after the image detection unit performs visual detection on the image data using the image detection model.
In this embodiment, different deep learning or machine learning models can be used as the image detection model for different scenes; by customizing a dedicated model for each scene, the precision of the visual detection algorithm can be higher and the detection result more accurate. The deep learning models are mainly convolutional neural networks for target detection, tracking, and semantic segmentation, among which SSD, YOLO, DeepLab, and the like are commonly used; the main customization operations include input image size adaptation, output class adaptation, arrangement of the labeled data distribution, and setting and tuning of the training parameters.
Therefore, image detection models corresponding to a variety of scenes can be constructed and trained in advance. After the corresponding image detection model is selected from the pre-established multiple image detection models based on the scene parameters in step S106-2, the image detection unit may call an image detection model configuration file corresponding to the image detection model, operate the image detection model to perform visual detection on the image data, and output a detection result.
In practical application, when the scene detection algorithm run by the vehicle machine has detected the scene of the current image data, a corresponding image detection model is selected for detection according to the current scene (which can be a combination of multiple categories) and the speed and precision requirements. When the scene of the continuous image data changes, switching the image detection model only requires the main program to call a different image detection model configuration file and parameters; the image detection models and corresponding parameters for the various scenes are trained in advance and pre-stored in the device.
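A minimal sketch of such configuration-file-driven switching is given below; the directory layout, the JSON fields, and the helper names are assumptions introduced only for illustration, not the device's actual interface.

```python
import json
from pathlib import Path

MODEL_DIR = Path("/opt/adas/models")   # hypothetical location of the pre-stored models

def load_model_config(model_key: str) -> dict:
    """Read the pre-stored configuration (weights path, input size, classes, ...)
    for the image detection model identified by model_key, e.g. "detector_snow"."""
    config_path = MODEL_DIR / f"{model_key}.json"
    with open(config_path) as f:
        return json.load(f)

def switch_model(current_key: str, new_key: str):
    """Switch only when the scene-dependent model key actually changes."""
    if new_key == current_key:
        return None                       # keep the currently loaded model
    return load_model_config(new_key)     # main program then loads these weights/parameters
```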
When selecting the image detection model, an image detection model can be selected from a plurality of pre-established image detection models according to a preset rule based on the scene parameters; and/or the scene parameters can be input into a preset classification model, which matches, based on the scene parameters, the model and model parameters of the image detection model used to detect the image data.
When the image detection model is selected according to the preset rule, an image detection model matching the scene type in the scene parameters can be selected from the plurality of pre-established image detection models. In brief, if rain is detected, a model specially trained for rainy scenes is used for detection; if the image is detected to be blurred, a model for blurred scene images is used, and the same applies to backlight, smear, and similar conditions.
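Under the same illustrative assumptions, the preset rule can be as simple as a lookup table from detected scene type to a pre-trained model key (all keys below are hypothetical); the returned key could then be fed to the switching sketch above.

```python
# Hypothetical mapping from detected scene type to a pre-trained model key.
SCENE_RULES = {
    "rain": "detector_rain",
    "snow": "detector_snow",
    "blurred": "detector_blur",
    "backlight": "detector_backlight",
    "smear": "detector_smear",
}

def select_by_rule(scene_type: str, default: str = "detector_generic") -> str:
    """Pick the model key matching the scene type, falling back to a generic model."""
    return SCENE_RULES.get(scene_type, default)
```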
For relatively complex scenes, the scene parameters may be input into a preset classification model, and the classification model matches, based on the scene parameters, the model and model parameters of the image detection model used to detect the image data. Before that, a classification model can be constructed; scene parameters of different scenes are collected and matched with the image detection models corresponding to those scenes, and the classification model is trained by taking the scene parameters of each scene and the corresponding image detection model as the input and the output of the classification model, respectively. The customized models provided by this embodiment may be varied; for example, if snow + high speed + smear + no small targets in the field of view is detected, the model customized for that combination is used for detection.
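For the classification-model route, a compact sketch of the offline training step might look as follows, again assuming scikit-learn; the scene-parameter encoding, the samples, and the model indices are illustrative assumptions rather than the patent's actual data.

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative scene-parameter encoding:
# [is_snow, is_rain, is_high_speed, has_smear, has_small_targets]
scene_samples = [
    [1, 0, 1, 1, 0],   # snow + high speed + smear + no small targets
    [0, 1, 1, 0, 0],   # rain + high speed + only a near large object
    [0, 0, 0, 0, 1],   # clear, low speed, small targets present
]
# Each label is the index of the pre-trained image detection model matched to that scene.
model_labels = [2, 1, 0]

scene_classifier = DecisionTreeClassifier(max_depth=4)
scene_classifier.fit(scene_samples, model_labels)

def match_model(scene_vector: list) -> int:
    """Return the index of the image detection model selected for the scene."""
    return int(scene_classifier.predict([scene_vector])[0])
```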
Fig. 2 is a schematic flow chart illustrating an image processing method combined with in-vehicle idle resources according to a preferred embodiment of the present invention, and as can be seen from fig. 2, the method provided in this embodiment may include:
s1, the vehicle end periodically detects whether the idle resources of the vehicle are enough, namely whether the idle resources meet the preset conditions for operating the execution scene detection algorithm; if yes, go to step S202; if not, the detection is continuously and repeatedly performed, and the interval time between the two detections can be determined according to the real-time performance of the scene detection, for example, the scene detection function can be performed once at intervals of a certain preset time (30 seconds or one minute, according to the automatic adjustment of the vehicle speed, if the vehicle speed is high, the preset time is short, and if the vehicle speed is low, the preset time is long) at the vehicle end, or at intervals of other times, which is not limited in the present invention.
S2, running the scene detection algorithm in the vehicle machine.
In this step, the object of the scene detection algorithm is the image data that, after being output from the camera, has been processed by the ISP on the image detection unit side. Assuming that one or more frames of image data are subjected to scene detection, the image data output by the ISP may be obtained over a wireless connection, or the ISP may transmit it to the vehicle machine end for scene detection.
S3, when the vehicle machine has executed the scene detection algorithm on the image data output by the ISP, as shown in Fig. 3, parameters such as the scene detection result (i.e., the scene type of the image data), the real-time requirement, and the precision requirement may be matched against the multiple pre-constructed image detection models to select an image detection model (model x in Fig. 3) matching the current parameters; alternatively, these parameters are input into the pre-constructed classification model, which outputs the finally selected image detection model. The type of the selected image detection model is then transmitted to the image detection unit.
Supposing the scene type of the detected image data is snow and both the real-time requirement and the precision requirement are low, a model specially trained for snowy scenes can be used for detection; if the detection result is rain + high speed + only a near large object in the field of view, the parameters, being more numerous, can be input into the classification model, which then selects the image detection model used to visually detect the image data.
S4, the image detection unit performs a visual detection on the current image data using the selected model x to assist in automatic parking or safe driving.
While the vehicle machine is running, the embodiment of the invention can switch the model, as well as its size and precision, according to the scene detection result, thereby improving the final detection precision.
Based on the same inventive concept, an embodiment of the present invention further provides an image processing apparatus combined with in-vehicle idle resources, and as shown in fig. 4, the image processing apparatus combined with in-vehicle idle resources provided in the embodiment of the present invention may include:
the checking module 410 is configured to periodically check the usage state parameters of each current hardware device in the vehicle machine, and obtain idle resource parameters of the vehicle machine based on the usage state parameters;
the determining module 420 is configured to determine whether the idle resource parameters meet the preset condition for the vehicle machine to run the specified scene detection algorithm;
the detection module 430 is configured to, if the idle resource parameters meet the preset condition, run the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit.
In an optional embodiment of the present invention, the detection module 430 is further configured to acquire image data that has been processed by an image processor and input to the image detection unit, and run the scene detection algorithm in the vehicle machine to identify scene parameters of the image data;
and to select a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and output a detection result after the image detection unit performs visual detection on the image data using the image detection model.
In an optional embodiment of the present invention, the detection module 430 is further configured to acquire image data that has been processed by the image processor and input to the image detection unit;
and to run the scene detection algorithm in the vehicle machine to perform scene detection on the image data, and identify the scene type of the image data, the size and number of target detection objects in the image data, and the real-time requirement and/or precision requirement;
wherein the scene type comprises a weather type and/or a type of geographic location in the image data.
In an optional embodiment of the present invention, the detecting module 430 is further configured to select an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameter; and/or
to input the scene parameters into a preset classification model, with the classification model matching, based on the scene parameters, the model and model parameters of the image detection model used to detect the image data.
In an optional embodiment of the present invention, the detecting module 430 is further configured to select, based on the scene parameter, an image detection model matching a scene type in the scene parameter from a plurality of pre-established image detection models.
In an alternative embodiment of the present invention, as shown in fig. 5, the apparatus further includes: a first construction module 440 configured to construct a classification model;
and to collect scene parameters of different scenes, match the image detection models corresponding to the scenes, and train the classification model by taking the scene parameters of each scene and the corresponding image detection model as the input and the output of the classification model, respectively.
In an alternative embodiment of the present invention, as shown in fig. 5, the apparatus further includes a second constructing module 450 configured to pre-construct and train image detection models corresponding to a plurality of scenes.
In an optional embodiment of the present invention, the detection module 430 is further configured to select a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, the image detection unit calls an image detection model configuration file corresponding to the image detection model, and the image detection model is operated to perform visual detection on the image data, and then a detection result is output.
Based on the same inventive concept, an embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer program codes, and when the computer program codes are run on a computing device, the computing device is caused to execute the image processing method combined with in-vehicle idle resources according to any of the foregoing embodiments.
The embodiment of the invention provides an image processing method combined with vehicle machine idle resources: when it is found that the idle resources of the hardware devices in the current vehicle machine meet the preset condition, a scene detection algorithm is run in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit. In the scheme provided by the embodiment of the invention, idle resources of the vehicle machine are utilized to interconnect the vehicle machine and the preset image detection unit, an auxiliary image processing function is provided, and assistance is given to the related detection algorithm on the preset image detection unit, so that detection precision is improved and cost is saved.
In addition, the embodiment of the invention can also switch the model, as well as its size and precision, according to the scene detection result, thereby improving the final detection precision.
It is clear to those skilled in the art that the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, further description is omitted here.
In addition, the functional units in the embodiments of the present invention may be physically independent of each other, two or more functional units may be integrated together, or all the functional units may be integrated in one processing unit. The integrated functional units may be implemented in the form of hardware, or in the form of software or firmware.
Those of ordinary skill in the art will understand that: the integrated functional units, if implemented in software and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computing device (e.g., a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention when the instructions are executed. And the aforementioned storage medium includes: u disk, removable hard disk, Read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disk, and other various media capable of storing program code.
Alternatively, all or part of the steps of implementing the foregoing method embodiments may be implemented by hardware (such as a computing device, e.g., a personal computer, a server, or a network device) associated with program instructions, which may be stored in a computer-readable storage medium, and when the program instructions are executed by a processor of the computing device, the computing device executes all or part of the steps of the method according to the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present invention; such modifications or substitutions do not depart from the scope of the present invention.

Claims (8)

1. An image processing method combined with vehicle machine idle resources, comprising:
periodically checking the usage state parameters of the current hardware devices in a vehicle machine, and obtaining idle resource parameters of the vehicle machine based on the usage state parameters;
judging whether the idle resource parameters meet a preset condition for the vehicle machine to run a specified scene detection algorithm;
if the idle resource parameters meet the preset condition, running the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit; wherein
the running of the scene detection algorithm in the vehicle machine to assist the preset image detection unit in performing real-time visual detection on the image data input to the image detection unit comprises:
acquiring image data that has been processed by an image processor and input to the image detection unit, and running the scene detection algorithm in the vehicle machine to identify scene parameters of the image data;
selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and outputting a detection result after the image detection unit performs visual detection on the image data using the image detection model;
the acquiring the image data input to the image detection unit and processed by the image processor, and operating the scene detection algorithm in the in-vehicle device to identify the scene parameters of the image data includes:
acquiring image data input to the image detection unit and processed by an image processor;
running the scene detection algorithm in the vehicle machine to perform scene detection on the image data, and identifying the scene type of the image data, the size and the number of target detection objects in the image data, the real-time requirement and/or the precision requirement;
wherein the scene type comprises a weather type and/or a type of geographic location in the image data.
2. The method of claim 1, wherein the selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameter comprises:
selecting an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameters; and/or
inputting the scene parameters into a preset classification model, and matching, by the classification model based on the scene parameters, the model and model parameters of an image detection model used to detect the image data.
3. The method according to claim 2, wherein the selecting an image detection model from a plurality of pre-established image detection models according to a preset rule based on the scene parameter comprises:
and selecting an image detection model matched with the scene type in the scene parameters from a plurality of pre-established image detection models based on the scene parameters.
4. The method of claim 2, wherein before the inputting of the scene parameters into a preset classification model and the matching, by the classification model based on the scene parameters, of the model and model parameters of the image detection model used to detect the image data, the method further comprises:
constructing a classification model;
and collecting scene parameters of different scenes, matching the image detection models corresponding to the scenes, and training the classification model by taking the scene parameters of each scene and the corresponding image detection model as the input and the output of the classification model, respectively.
5. The method according to any one of claims 1-4, wherein before selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameter, the method further comprises:
and pre-constructing and training an image detection model corresponding to a plurality of scenes.
6. The method according to claim 5, wherein the selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and outputting a detection result after the image detection unit performs visual detection on the image data by using the image detection model comprises:
selecting a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, calling an image detection model configuration file corresponding to the image detection model by the image detection unit, operating the image detection model to perform visual detection on the image data, and outputting a detection result.
7. An image processing apparatus incorporating in-vehicle idle resources, comprising:
the checking module, configured to periodically check the usage state parameters of each current hardware device in the vehicle machine and obtain idle resource parameters of the vehicle machine based on the usage state parameters;
the judging module, configured to judge whether the idle resource parameters meet a preset condition for the vehicle machine to run a specified scene detection algorithm;
the detection module, configured to, if the idle resource parameters meet the preset condition, run the scene detection algorithm in the vehicle machine to assist a preset image detection unit in performing real-time visual detection on image data input to the image detection unit; wherein
the detection module is further configured to, if the idle resource parameters meet the preset condition, acquire image data that has been processed by an image processor and input to the image detection unit, run the scene detection algorithm in the vehicle machine to perform scene detection on the image data, identify the scene parameters of the image data, including the scene type of the image data, the size and number of target detection objects in the image data, and the real-time requirement and/or precision requirement, select a corresponding image detection model from a plurality of pre-established image detection models based on the scene parameters, and output a detection result after the image detection unit performs visual detection on the image data using the image detection model;
wherein the scene type comprises a weather type and/or a type of geographic location in the image data.
8. A computer storage medium storing computer program code which, when run on a computing device, causes the computing device to perform the image processing method in conjunction with in-vehicle idle resources of any of claims 1-6.
CN201910562835.9A 2019-06-26 2019-06-26 Image processing method and device combined with vehicle machine idle resources Active CN110276322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910562835.9A CN110276322B (en) 2019-06-26 2019-06-26 Image processing method and device combined with vehicle machine idle resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910562835.9A CN110276322B (en) 2019-06-26 2019-06-26 Image processing method and device combined with vehicle machine idle resources

Publications (2)

Publication Number Publication Date
CN110276322A CN110276322A (en) 2019-09-24
CN110276322B true CN110276322B (en) 2022-01-07

Family

ID=67963315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910562835.9A Active CN110276322B (en) 2019-06-26 2019-06-26 Image processing method and device combined with vehicle machine idle resources

Country Status (1)

Country Link
CN (1) CN110276322B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105390021A (en) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 Parking spot state detection method and parking spot state detection device
EP2743861A3 (en) * 2012-12-12 2016-05-11 Ricoh Company, Ltd. Method and device for detecting continuous object in disparity direction based on disparity map
CN106791475A (en) * 2017-01-23 2017-05-31 上海兴芯微电子科技有限公司 Exposure adjustment method and the vehicle mounted imaging apparatus being applicable
CN107563256A (en) * 2016-06-30 2018-01-09 北京旷视科技有限公司 Aid in driving information production method and device, DAS (Driver Assistant System)
CN107944375A (en) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 Automatic Pilot processing method and processing device based on scene cut, computing device
CN108874615A (en) * 2017-05-16 2018-11-23 惠州市德赛西威汽车电子股份有限公司 A kind of in-vehicle multi-media system device for detecting performance and detection method
CN109074069A (en) * 2016-03-29 2018-12-21 智动科技有限公司 Autonomous vehicle with improved vision-based detection ability
CN109712431A (en) * 2017-10-26 2019-05-03 丰田自动车株式会社 Drive assistance device and driving assistance system
CN109918977A (en) * 2017-12-13 2019-06-21 华为技术有限公司 Determine the method, device and equipment of free time parking stall

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366386A (en) * 2013-07-14 2013-10-23 西安电子科技大学 Parallel image uncompressing system based on multiple processes and multiple threads
CN107563512B (en) * 2017-08-24 2023-10-17 腾讯科技(上海)有限公司 Data processing method, device and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2743861A3 (en) * 2012-12-12 2016-05-11 Ricoh Company, Ltd. Method and device for detecting continuous object in disparity direction based on disparity map
CN105390021A (en) * 2015-11-16 2016-03-09 北京蓝卡科技股份有限公司 Parking spot state detection method and parking spot state detection device
CN109074069A (en) * 2016-03-29 2018-12-21 智动科技有限公司 Autonomous vehicle with improved vision-based detection ability
CN107563256A (en) * 2016-06-30 2018-01-09 北京旷视科技有限公司 Aid in driving information production method and device, DAS (Driver Assistant System)
CN106791475A (en) * 2017-01-23 2017-05-31 上海兴芯微电子科技有限公司 Exposure adjustment method and the vehicle mounted imaging apparatus being applicable
CN108874615A (en) * 2017-05-16 2018-11-23 惠州市德赛西威汽车电子股份有限公司 A kind of in-vehicle multi-media system device for detecting performance and detection method
CN109712431A (en) * 2017-10-26 2019-05-03 丰田自动车株式会社 Drive assistance device and driving assistance system
CN107944375A (en) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 Automatic Pilot processing method and processing device based on scene cut, computing device
CN109918977A (en) * 2017-12-13 2019-06-21 华为技术有限公司 Determine the method, device and equipment of free time parking stall

Also Published As

Publication number Publication date
CN110276322A (en) 2019-09-24

Similar Documents

Publication Publication Date Title
CN108875603B (en) Intelligent driving control method and device based on lane line and electronic equipment
US11318928B2 (en) Vehicular automated parking system
US10229332B2 (en) Method and apparatus for recognizing obstacle of vehicle
US20210213961A1 (en) Driving scene understanding
US11453335B2 (en) Intelligent ultrasonic system and rear collision warning apparatus for vehicle
US11586856B2 (en) Object recognition device, object recognition method, and object recognition program
Devi et al. A comprehensive survey on autonomous driving cars: A perspective view
CN112793567A (en) Driving assistance method and system based on road condition detection
CN116634638A (en) Light control strategy generation method, light control method and related device
CN112455465B (en) Driving environment sensing method and device, electronic equipment and storage medium
CN110837760A (en) Target detection method, training method and device for target detection
CN110276322B (en) Image processing method and device combined with vehicle machine idle resources
CN110177222B (en) Camera exposure parameter adjusting method and device combining idle resources of vehicle machine
CN116434156A (en) Target detection method, storage medium, road side equipment and automatic driving system
WO2023230740A1 (en) Abnormal driving behavior identification method and device and vehicle
CN115481724A (en) Method for training neural networks for semantic image segmentation
TW201535323A (en) System and method for image defogging, system and method for driving assistance
CN116433712A (en) Fusion tracking method and device based on pre-fusion of multi-sensor time sequence sensing results
CN113887284A (en) Target object speed detection method, device, equipment and readable storage medium
CN114911813B (en) Updating method and device of vehicle-mounted perception model, electronic equipment and storage medium
JP2020534600A (en) Lane recognition methods and devices, driver assistance systems and vehicles
US20240144701A1 (en) Determining lanes from drivable area
WO2022193154A1 (en) Windshield wiper control method, automobile, and computer-readable storage medium
US20220309799A1 (en) Method for Automatically Executing a Vehicle Function, Method for Evaluating a Computer Vision Method and Evaluation Circuit for a Vehicle
Andika et al. Improved feature extraction network in lightweight YOLOv7 model for real-time vehicle detection on low-cost hardware

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220323

Address after: 430051 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Wuhan, Hubei Province

Patentee after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: No.c101, chuanggu start up area, taizihu cultural Digital Industrial Park, No.18 Shenlong Avenue, Wuhan Economic Development Zone, Hubei Province

Patentee before: HUBEI ECARX TECHNOLOGY Co.,Ltd.
