CN113077378B - Image processing and target identification method based on vehicle-mounted camera - Google Patents
Image processing and target identification method based on vehicle-mounted camera
- Publication number
- CN113077378B (application CN202110351792.7A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- calculation
- main controller
- target recognition
- image processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image processing and target recognition method based on a vehicle-mounted camera. The system adopted by the method comprises a vehicle-mounted camera, an image processing module and a target recognition module. The vehicle-mounted camera acquires an original image and sends it to the image processing module. The image processing module receives the original image, extracts its resolution, focal length, color depth, light sensitivity and exposure value, replaces these with a preset resolution, preset focal length, preset color depth, preset light sensitivity and preset exposure value, and generates a new image; it then judges whether the new image was successfully generated, and if so judges that the data conversion succeeded and sends the new image to the target recognition module, otherwise it judges that the data conversion failed. The target recognition module performs target recognition after receiving the new image and stores the target recognition result. The invention improves the integration level and expandability of the system.
Description
Technical Field
The invention belongs to the field of intelligent connected vehicles, and particularly relates to an image processing and target identification method based on a vehicle-mounted camera.
Background
The intelligent connected vehicle, or ICV (Intelligent Connected Vehicle), is the organic combination of the Internet of Vehicles and the intelligent vehicle: a vehicle equipped with advanced on-board sensors, controllers, actuators and the like, forming a complex multi-sensor system. By integrating modern communication and network technologies, it enables intelligent information exchange and sharing among the vehicle, people, other vehicles, roads and backend platforms, achieves safe, comfortable, energy-saving and efficient driving, and is ultimately expected to replace human operation as a new generation of vehicle.
With the development of electronics, informatization and artificial intelligence, miniaturized embedded visual sensors have become widely applied, and cameras are widely used in the research, development and testing of intelligent connected vehicles; more intelligent information is expected from vehicle-mounted cameras, i.e., perceiving the driving environment through the camera's field of view. The visual sensor (camera) of an intelligent connected vehicle can realize functions such as lane departure warning, forward collision warning, pedestrian collision warning, traffic sign recognition, blind spot monitoring, driver attention monitoring, panoramic parking, parking assistance and lane keeping assistance. In the vehicle-mounted camera industry chain, the camera and the image processing algorithm together form the vehicle-mounted camera solution, and image processing algorithms are developed for the signals output by different cameras. An intelligent connected vehicle carries a large number of vehicle-mounted cameras, and because the cameras differ in manufacturer and model, the resolution and color depth of the captured images differ; in addition, differences in ambient light sources and in focusing cause the light sensitivity, exposure value and focal length of the captured images to differ. Processing such images in the conventional way requires a large amount of computation for target recognition and offers low expandability.
Disclosure of Invention
The invention aims to provide an image processing and target recognition method based on a vehicle-mounted camera so as to improve the integration level and expandability of a system.
The invention relates to an image processing and target recognition method based on a vehicle-mounted camera, which adopts a system comprising the vehicle-mounted camera, an image processing module and a target recognition module, wherein the vehicle-mounted camera acquires an original image and sends the original image to the image processing module; the image processing module receives the original image and then performs the following processing:
extracting resolution, focal length, color depth, photosensitivity and exposure value of an original image;
replacing the resolution of the original image with a preset resolution, replacing the focal length of the original image with a preset focal length, replacing the color depth of the original image with a preset color depth, replacing the light sensitivity of the original image with a preset light sensitivity, replacing the exposure value of the original image with a preset exposure value, and generating a new image according to the preset resolution, the preset focal length, the preset color depth, the preset light sensitivity and the preset exposure value; the preset resolution, the preset focal length, the preset color depth, the preset sensitivity and the preset exposure value are stored standard camera parameter values;
judging whether the new image is successfully generated, if so, judging that the data conversion is successful, and sending the new image to a target identification module, otherwise, judging that the data conversion is failed;
and the target recognition module performs target recognition after receiving the new image and stores a target recognition result.
Preferably, after the image processing module judges that the data conversion fails, the original image and the data conversion failure information are uploaded to the cloud platform; after the target recognition module stores the target recognition result, the target recognition result is displayed through a display screen, and the new image and the target recognition result are uploaded to the cloud platform.
The image processing module consists of a vehicle-mounted main controller I and i vehicle-mounted auxiliary controllers I.
If the calculation power of the vehicle-mounted main controller I is enough, the vehicle-mounted main controller I executes image processing to generate a new image.
If the calculation power of the vehicle-mounted main controller I is insufficient, the vehicle-mounted main controller I sends an image processing calculation task request to the i vehicle-mounted auxiliary controllers I. After receiving the request, the i vehicle-mounted auxiliary controllers I feed their respective calculation power values back to the vehicle-mounted main controller I. After receiving the i calculation power values, the vehicle-mounted main controller I sorts the corresponding i vehicle-mounted auxiliary controllers I by calculation power, divides the image processing calculation task into i equal parts, and issues the i parts to the corresponding i vehicle-mounted auxiliary controllers I in descending order of calculation power. The i vehicle-mounted auxiliary controllers I perform the calculation according to the received image processing calculation tasks and return their calculation results to the vehicle-mounted main controller I after completion, and the i calculation results are combined and collated to generate a new image.
If the vehicle-mounted main controller I receives the calculation results returned by all i vehicle-mounted auxiliary controllers I within the preset time, it combines and collates those results to generate a new image.
If, within the preset time, the vehicle-mounted main controller I receives only i-k1 calculation results from the vehicle-mounted auxiliary controllers I, and k1 ≤ i/2, the vehicle-mounted main controller I issues the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return results to the k1 vehicle-mounted auxiliary controllers I that returned their results first. Those k1 controllers perform the calculation according to the received image processing calculation tasks and return their results to the vehicle-mounted main controller I after completion, and the vehicle-mounted main controller I combines and collates the i calculation results to generate a new image.
If, within the preset time, the vehicle-mounted main controller I receives only i-k1 calculation results from the vehicle-mounted auxiliary controllers I, and k1 > i/2, the vehicle-mounted main controller I combines the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return results, divides them into i-k1 equal parts, and issues these i-k1 parts to the i-k1 vehicle-mounted auxiliary controllers I that returned calculation results. Those i-k1 controllers perform the calculation according to the received image processing calculation tasks and return their results to the vehicle-mounted main controller I after completion, and the vehicle-mounted main controller I combines and collates the i calculation results to generate a new image.
Preferably, the target recognition module consists of a vehicle-mounted main controller II and j vehicle-mounted auxiliary controllers II.
If the calculation power of the vehicle-mounted main controller II is enough, the vehicle-mounted main controller II executes target recognition to generate a target recognition result.
If the calculation power of the vehicle-mounted main controller II is insufficient, the vehicle-mounted main controller II sends a target recognition calculation task request to the j vehicle-mounted auxiliary controllers II. After receiving the request, the j vehicle-mounted auxiliary controllers II feed their respective calculation power values back to the vehicle-mounted main controller II. After receiving the j calculation power values, the vehicle-mounted main controller II sorts the corresponding j vehicle-mounted auxiliary controllers II by calculation power, divides the target recognition calculation task into j equal parts, and issues the j parts to the corresponding j vehicle-mounted auxiliary controllers II in descending order of calculation power. The j vehicle-mounted auxiliary controllers II perform the calculation according to the received target recognition calculation tasks and return their calculation results to the vehicle-mounted main controller II after completion, and the vehicle-mounted main controller II combines and collates the j calculation results to generate the target recognition result.
If the vehicle-mounted main controller II receives the calculation results returned by all j vehicle-mounted auxiliary controllers II within the preset time, it combines and collates those results to generate the target recognition result.
If, within the preset time, the vehicle-mounted main controller II receives only j-k2 calculation results from the vehicle-mounted auxiliary controllers II, and k2 ≤ j/2, the vehicle-mounted main controller II issues the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return results to the k2 vehicle-mounted auxiliary controllers II that returned their results first. Those k2 controllers perform the calculation according to the received target recognition calculation tasks and return their results to the vehicle-mounted main controller II after completion, and the vehicle-mounted main controller II combines and collates the j calculation results to generate the target recognition result.
If, within the preset time, the vehicle-mounted main controller II receives only j-k2 calculation results from the vehicle-mounted auxiliary controllers II, and k2 > j/2, the vehicle-mounted main controller II combines the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return results, divides them into j-k2 equal parts, and issues these j-k2 parts to the j-k2 vehicle-mounted auxiliary controllers II that returned calculation results. Those j-k2 controllers perform the calculation according to the received target recognition calculation tasks and return their results to the vehicle-mounted main controller II after completion, and the vehicle-mounted main controller II combines and collates the j calculation results to generate the target recognition result.
The invention has the following effects:
(1) The method converts the original images acquired by different vehicle-mounted cameras into new images in a unified format for subsequent target recognition, which improves the integration level and expandability of the system, requires no large-scale modification of the vehicle-mounted cameras, and keeps the cost low.
(2) Based on actual conditions, when the calculation power of the vehicle-mounted main controller I or the vehicle-mounted main controller II is insufficient, the i vehicle-mounted auxiliary controllers I and the j vehicle-mounted auxiliary controllers II assist with the calculation, which improves calculation efficiency and avoids stalls during the calculation process.
Drawings
Fig. 1 is a schematic block diagram of a system adopted by an image processing and target recognition method based on a vehicle-mounted camera in this embodiment.
Fig. 2 is a flowchart of an image processing and target recognition method based on a vehicle-mounted camera in this embodiment.
Fig. 3 is a flowchart of task allocation performed by the image processing module in the present embodiment.
Fig. 4 is a flow chart of task allocation for performing object recognition by the object recognition module in this embodiment.
Detailed Description
The image processing and target recognition method based on the vehicle-mounted camera as shown in fig. 1 to 4 is applied to an intelligent network-connected automobile, and the system adopted by the method comprises a vehicle-mounted camera 1, an image processing module 2 and a target recognition module 3, wherein the vehicle-mounted camera 1 is connected with the image processing module 2, and the image processing module 2 is connected with the target recognition module 3. The image processing and target identification method specifically comprises the following steps:
Step one, the vehicle-mounted camera 1 collects an original image (for example, an image of the road and obstacles ahead of the vehicle) and sends it to the image processing module 2, and then step two is executed.
And step two, the image processing module 2 extracts the resolution R ', the focal length F ', the color depth C ', the light sensitivity ISO ' and the exposure value EV ' of the original image after receiving the original image, and then the step three is executed.
Step three, the image processing module 2 replaces the resolution R' of the original image with the preset resolution R, the focal length F' with the preset focal length F, the color depth C' with the preset color depth C, the light sensitivity ISO' with the preset light sensitivity ISO, and the exposure value EV' with the preset exposure value EV, and generates a new image according to the preset resolution R, preset focal length F, preset color depth C, preset light sensitivity ISO and preset exposure value EV; then step four is executed. The preset resolution R, preset focal length F, preset color depth C, preset light sensitivity ISO and preset exposure value EV are stored standard camera parameter values; the specific values of R, F, C, ISO and EV can be customized by each automobile manufacturer.
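As a concrete illustration of this conversion step, the sketch below (Python with Pillow; the class, function names and preset values are the author's assumptions and are not part of the patent) resamples an incoming frame to a preset resolution and color depth and rewrites the focal length, light sensitivity and exposure value as unified metadata, returning None when the data conversion fails:

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class PresetParams:
    """Stored standard camera parameter values (R, F, C, ISO, EV); numbers are illustrative."""
    resolution: tuple = (1280, 720)   # R
    focal_length_mm: float = 6.0      # F
    color_depth_bits: int = 8         # C, bits per channel
    iso: int = 400                    # ISO
    exposure_value: float = 10.0      # EV

def convert_image(raw: Image.Image, preset: PresetParams):
    """Replace the original image's parameters with the presets.
    Returns (new_image, metadata) on success, (None, None) on failure."""
    try:
        img = raw.convert("RGB")             # unify color depth to 8 bits per channel
        img = img.resize(preset.resolution)  # unify resolution
        meta = {                             # unified capture metadata for the new image
            "focal_length_mm": preset.focal_length_mm,
            "iso": preset.iso,
            "exposure_value": preset.exposure_value,
        }
        return img, meta
    except (OSError, ValueError):
        return None, None                    # corresponds to "data conversion failed"
```

If convert_image returns an image, the conversion would be judged successful and the new image handed to the target recognition module; otherwise the original image and the failure information would be uploaded to the cloud platform, as described in step six below.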
And step four, the image processing module 2 judges whether the new image was successfully generated; if so, step five is executed, otherwise step six is executed.
And step five, the image processing module 2 judges that the data conversion is successful, and sends the new image to the target recognition module 3, and then step seven is executed.
And step six, the image processing module 2 judges that the data conversion fails, and uploads the original image and the data conversion failure information to the cloud platform 4, and then ends.
And step seven, the target recognition module 3 carries out target recognition after receiving the new image, stores a target recognition result, uploads the new image and the target recognition result to the cloud platform 4, displays the target recognition result through the display screen 5, and then ends.
By the method, the conditions of the surrounding environment and the road of the vehicle can be detected in real time, and the driving environment analysis is facilitated.
The image processing module 2 consists of a vehicle-mounted main controller I and i vehicle-mounted auxiliary controllers I. As shown in fig. 3, the image processing module 2 performs the task allocation of image processing as follows:
S1, the vehicle-mounted main controller I judges whether its calculation power is sufficient; if so, the vehicle-mounted main controller I performs the image processing to generate a new image and the process ends, otherwise S2 is executed.
S2, the vehicle-mounted main controller I sends an image processing calculation task request to the i vehicle-mounted auxiliary controllers I, and then S3 is executed.
S3, after the i vehicle-mounted auxiliary controllers I receive the image processing calculation task request, feeding back respective calculation force values to the vehicle-mounted main controller I, and then executing S4; wherein the force value C is calculated n =α*U n +β*R n ,C n Representing the calculated power value of the nth vehicle-mounted auxiliary controller I, U n CPU value of nth vehicle-mounted auxiliary controller I, R n The memory value of the nth vehicle-mounted auxiliary controller I is represented, alpha represents the weight of the CPU, beta represents the weight of the memory, and is a fixed value, n is an integer, and n is more than or equal to 1 and less than or equal to i.
S4, after receiving the i calculated force values, the vehicle-mounted main controller I sorts the corresponding i vehicle-mounted auxiliary controllers I according to the calculated force values, and then S5 is executed;
s5, the vehicle-mounted main controller I averagely divides the image processing calculation tasks into i parts, sequentially distributes the i parts of image processing calculation tasks to the corresponding i vehicle-mounted auxiliary controllers I according to the sequence of the calculation values from large to small, and then executes S6;
s6, i vehicle-mounted auxiliary controllers I calculate according to the received image processing calculation task, the calculation result is returned to the vehicle-mounted main controller I after the calculation is completed, and then S7 is executed;
s7, the vehicle-mounted main controller I judges whether calculation results returned by the i vehicle-mounted auxiliary controllers I are received in preset time, if so, S8 is executed, and otherwise S9 is executed;
S8, the vehicle-mounted main controller I combines and collates the calculation results returned by the i vehicle-mounted auxiliary controllers I to generate a new image, and then the process ends;
S9, the vehicle-mounted main controller I has received only i-k1 calculation results from the vehicle-mounted auxiliary controllers I within the preset time; it judges whether k1 ≤ i/2, and if so S10 is executed, otherwise S12 is executed;
S10, the vehicle-mounted main controller I issues the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return calculation results, in the order in which results were returned, to the k1 vehicle-mounted auxiliary controllers I that returned their calculation results first, and then S11 is executed;
S11, the k1 vehicle-mounted auxiliary controllers I that returned their calculation results first perform the calculation according to the received image processing calculation tasks, return the calculation results to the vehicle-mounted main controller I after completion, and then S14 is executed;
S12, the vehicle-mounted main controller I combines the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return calculation results, divides them into i-k1 equal parts, and issues the i-k1 parts, in the order in which results were returned, to the i-k1 vehicle-mounted auxiliary controllers I that returned calculation results, and then S13 is executed;
S13, the i-k1 vehicle-mounted auxiliary controllers I that returned calculation results perform the calculation according to the received image processing calculation tasks, return the calculation results to the vehicle-mounted main controller I after completion, and then S14 is executed;
S14, the vehicle-mounted main controller I combines and collates the i calculation results to generate a new image, and then the process ends.
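The following Python sketch illustrates the S2–S14 allocation logic in a simplified, sequential form; the function names, the weights α and β, the latency model and the choice of work items are illustrative assumptions by the editor, not details prescribed by the patent:

```python
ALPHA, BETA = 0.6, 0.4      # fixed CPU and memory weights (illustrative values)
PRESET_TIMEOUT = 1.0        # the "preset time" for collecting results (arbitrary unit)

def compute_power(cpu_value, mem_value):
    """Cn = alpha * Un + beta * Rn: calculation power value of one auxiliary controller (S3)."""
    return ALPHA * cpu_value + BETA * mem_value

def split_evenly(items, parts):
    """Divide a list of work items into `parts` roughly equal chunks (S5)."""
    q, r = divmod(len(items), parts)
    chunks, start = [], 0
    for p in range(parts):
        end = start + q + (1 if p < r else 0)
        chunks.append(items[start:end])
        start = end
    return chunks

def allocate(items, controllers, run_subtask):
    """controllers: dicts with 'name', 'cpu', 'mem' and a simulated 'latency'.
    run_subtask(controller, chunk) -> partial result.
    Returns the partial results that the main controller merges in S14."""
    # S4/S5: rank auxiliary controllers by calculation power and split the task into i parts,
    # issued in descending order of calculation power.
    ranked = sorted(controllers, key=lambda c: compute_power(c["cpu"], c["mem"]), reverse=True)
    chunks = split_evenly(items, len(ranked))
    results, responders, missing = [], [], []
    for ctrl, chunk in zip(ranked, chunks):       # S6/S7: collect results within the preset time
        if ctrl["latency"] <= PRESET_TIMEOUT:
            results.append(run_subtask(ctrl, chunk))
            responders.append(ctrl)
        else:
            missing.append(chunk)                 # this controller missed the deadline
    responders.sort(key=lambda c: c["latency"])   # "returned calculation results first"
    k1 = len(missing)
    if k1 == 0 or not responders:                 # S8, or a degenerate case the patent ignores
        return results
    if k1 <= len(ranked) / 2:                     # S9 -> S10/S11: one missed chunk per fast responder
        retry_plan = list(zip(responders[:k1], missing))
    else:                                         # S9 -> S12/S13: regroup missing work among responders
        leftover = [item for chunk in missing for item in chunk]
        retry_plan = list(zip(responders, split_evenly(leftover, len(responders))))
    for ctrl, chunk in retry_plan:                # the re-issued subtasks run to completion
        results.append(run_subtask(ctrl, chunk))
    return results                                # S14: the main controller merges these
```

With run_subtask set, for example, to a routine that processes one portion of the raw frame, the main controller would call allocate(...) and stitch the returned partial results into the new image (S14); the same pattern applies to the target recognition module with j controllers in place of i.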
The target recognition module 3 consists of a vehicle-mounted main controller II and j vehicle-mounted auxiliary controllers II. As shown in fig. 4, the task allocation manner in which the object recognition module 3 performs object recognition is as follows:
P1, the vehicle-mounted main controller II judges whether its calculation power is sufficient; if so, the vehicle-mounted main controller II performs the target recognition to generate a target recognition result and the process ends, otherwise P2 is executed.
And P2, the vehicle-mounted main controller II sends a target identification calculation task request to j vehicle-mounted auxiliary controllers II, and then P3 is executed.
P3, after receiving the target recognition calculation task request, the j vehicle-mounted auxiliary controllers II feed their respective calculation power values back to the vehicle-mounted main controller II, and then P4 is executed; the calculation power value is Cm = α*Um + β*Rm, where Cm denotes the calculation power value of the mth vehicle-mounted auxiliary controller II, Um denotes its CPU value, Rm denotes its memory value, α denotes the weight of the CPU, β denotes the weight of the memory, and m is an integer with 1 ≤ m ≤ j.
And P4, after receiving the j calculated force values, the vehicle-mounted main controller II sorts the corresponding j vehicle-mounted auxiliary controllers II according to the calculated force values, and then P5 is executed.
P5, the vehicle-mounted main controller II averagely divides the target recognition calculation tasks into j parts, sequentially distributes the j parts of target recognition calculation tasks to the corresponding j vehicle-mounted auxiliary controllers II according to the sequence of the calculation values from large to small, and then executes P6;
p6, j vehicle-mounted auxiliary controllers II calculate according to the received target identification calculation task, the calculation result is returned to the vehicle-mounted main controller II after calculation is completed, and then P7 is executed;
p7, the vehicle-mounted main controller II judges whether calculation results returned by the j vehicle-mounted auxiliary controllers II are received in preset time, if yes, P8 is executed, and if not, P9 is executed;
p8, the vehicle-mounted main controller II carries out combination and arrangement on calculation results returned by the j vehicle-mounted auxiliary controllers II to generate a target identification result, and then the process is finished;
P9, the vehicle-mounted main controller II has received only j-k2 calculation results from the vehicle-mounted auxiliary controllers II within the preset time; it judges whether k2 ≤ j/2, and if so P10 is executed, otherwise P12 is executed;
P10, the vehicle-mounted main controller II issues the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return calculation results, in the order in which results were returned, to the k2 vehicle-mounted auxiliary controllers II that returned their calculation results first, and then P11 is executed;
P11, the k2 vehicle-mounted auxiliary controllers II that returned their calculation results first perform the calculation according to the received target recognition calculation tasks, return the calculation results to the vehicle-mounted main controller II after completion, and then P14 is executed;
P12, the vehicle-mounted main controller II combines the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return calculation results, divides them into j-k2 equal parts, and issues the j-k2 parts, in the order in which results were returned, to the j-k2 vehicle-mounted auxiliary controllers II that returned calculation results, and then P13 is executed;
P13, the j-k2 vehicle-mounted auxiliary controllers II that returned calculation results perform the calculation according to the received target recognition calculation tasks, return the calculation results to the vehicle-mounted main controller II after completion, and then P14 is executed;
P14, the vehicle-mounted main controller II combines and collates the j calculation results to generate a target recognition result, and then the process ends.
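The patent does not specify how a target recognition calculation task is divided among the j vehicle-mounted auxiliary controllers II. One possible scheme, sketched below in Python (the function names and the strip-based partitioning are illustrative assumptions only), is to tile the unified image into j horizontal strips, let each controller run detection on one strip, and have the vehicle-mounted main controller II map the detections back to full-image coordinates when combining the j results:

```python
def split_for_recognition(image_width, image_height, j):
    """Divide one recognition task into j parts: here, j horizontal strips of the unified image."""
    strip_h = image_height // j
    strips = []
    for idx in range(j):
        top = idx * strip_h
        bottom = image_height if idx == j - 1 else top + strip_h
        strips.append((0, top, image_width, bottom))   # (left, top, right, bottom) crop box
    return strips

def merge_detections(per_strip_results):
    """Combine per-strip detections into full-image coordinates (P8/P14).
    Each entry is (strip_box, [(label, (x1, y1, x2, y2)), ...]) with strip-local boxes."""
    merged = []
    for (_, top, _, _), detections in per_strip_results:
        for label, (x1, y1, x2, y2) in detections:
            merged.append((label, (x1, y1 + top, x2, y2 + top)))
    return merged
```

A real partitioning would also have to handle objects that straddle strip boundaries, for example by overlapping adjacent strips and de-duplicating detections when merging.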
Through the above image processing, the original images captured by the multiple vehicle-mounted cameras installed on the intelligent network-connected automobile can all be converted into new images in a unified format (i.e., resolution R, focal length F, color depth C, light sensitivity ISO and exposure value EV) for subsequent target recognition.
If the data conversion fails, the original image and the data conversion failure information are uploaded to the cloud platform 4. An engineer can analyze the cause of the data conversion failure on the cloud platform and then perform a vehicle-end OTA upgrade.
The new image and the target recognition result are uploaded to the cloud platform 4. The data processing system on the cloud platform comprises a control module and a communication module: the communication module communicates with the vehicle end, and the control module performs the data operations. The cloud platform 4 can use the new image and the target recognition result for data calibration and data training, and a new model is integrated after training. The specific process is as follows:
1) The control module of the data processing system positioned on the cloud platform extracts and forms initial image data according to the received image data, and performs data annotation;
2) A convolutional neural network is used as the machine learning model and is trained on the annotated image data;
3) Performing feature learning on the image by using a convolutional neural network, classifying the learned features, and completing target classification to obtain a deep learning model;
4) The resulting deep learning model is forwarded by the control module to the communication module, which sends it to the vehicle end; the optimized data model helps improve target recognition accuracy.
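A minimal sketch of steps 2) and 3), assuming PyTorch and a toy setup (the layer sizes, class count, input shape and stand-in training data below are illustrative assumptions, not values taken from the patent):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional network: feature learning followed by classification."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Annotated image data would come from step 1); random tensors stand in here.
images = torch.randn(8, 3, 64, 64)          # batch of normalized 64x64 RGB crops
labels = torch.randint(0, 2, (8,))          # annotations from the cloud platform

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):                       # training loop (step 2)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)  # classify the learned features (step 3)
    loss.backward()
    optimizer.step()

# The trained model would then be sent to the vehicle end via the communication
# module (step 4), e.g. after torch.save(model.state_dict(), "model.pt").
```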
Claims (4)
1. An image processing and target recognition method based on a vehicle-mounted camera adopts a system which comprises the vehicle-mounted camera (1), an image processing module (2) and a target recognition module (3), wherein the vehicle-mounted camera (1) acquires an original image and sends the original image to the image processing module (2); the method is characterized in that: the image processing module (2) receives the original image and then performs the following processing:
extracting resolution, focal length, color depth, photosensitivity and exposure value of an original image;
replacing the resolution of the original image with a preset resolution, replacing the focal length of the original image with a preset focal length, replacing the color depth of the original image with a preset color depth, replacing the light sensitivity of the original image with a preset light sensitivity, replacing the exposure value of the original image with a preset exposure value, and generating a new image according to the preset resolution, the preset focal length, the preset color depth, the preset light sensitivity and the preset exposure value; the preset resolution, the preset focal length, the preset color depth, the preset sensitivity and the preset exposure value are stored standard camera parameter values;
judging whether the new image is successfully generated, if so, judging that the data conversion is successful, and sending the new image to a target identification module (3), otherwise, judging that the data conversion is failed;
the target recognition module (3) carries out target recognition after receiving the new image and stores a target recognition result;
the image processing module (2) consists of a vehicle-mounted main controller I and i vehicle-mounted auxiliary controllers I;
if the calculation power of the vehicle-mounted main controller I is enough, the vehicle-mounted main controller I executes image processing to generate a new image;
if the calculation force of the vehicle-mounted main controller I is insufficient, the vehicle-mounted main controller I sends an image processing calculation task request to the i vehicle-mounted auxiliary controllers I, the i vehicle-mounted auxiliary controllers I feed respective calculation force values back to the vehicle-mounted main controller I after receiving the image processing calculation task request, the vehicle-mounted main controller I sorts the corresponding i vehicle-mounted auxiliary controllers I according to the magnitude of the calculation force values after receiving the i calculation force values, the vehicle-mounted main controller I equally divides the image processing calculation task into i parts, and sequentially sends the i parts of image processing calculation tasks to the corresponding i vehicle-mounted auxiliary controllers I according to the order of the calculation force values from large to small, the i vehicle-mounted auxiliary controllers I calculate according to the received image processing calculation task, and the calculation result is returned to the vehicle-mounted main controller I after calculation is completed;
if the vehicle-mounted main controller I receives the calculation results returned by the i vehicle-mounted auxiliary controllers I within the preset time, the vehicle-mounted main controller I carries out combination and arrangement on the calculation results returned by the i vehicle-mounted auxiliary controllers I to generate a new image;
if, within the preset time, the vehicle-mounted main controller I receives only i-k1 calculation results returned by the vehicle-mounted auxiliary controllers I, and k1 ≤ i/2, the vehicle-mounted main controller I issues the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return calculation results to the k1 vehicle-mounted auxiliary controllers I that returned their calculation results first; the k1 vehicle-mounted auxiliary controllers I that returned their calculation results first perform the calculation according to the received image processing calculation tasks and return the calculation results to the vehicle-mounted main controller I after completion, and the vehicle-mounted main controller I combines and collates the i calculation results to generate a new image;
if, within the preset time, the vehicle-mounted main controller I receives only i-k1 calculation results returned by the vehicle-mounted auxiliary controllers I, and k1 > i/2, the vehicle-mounted main controller I combines the image processing calculation tasks of the k1 vehicle-mounted auxiliary controllers I that did not return calculation results, divides them into i-k1 equal parts, and issues the i-k1 image processing calculation tasks to the i-k1 vehicle-mounted auxiliary controllers I that returned calculation results; the i-k1 vehicle-mounted auxiliary controllers I that returned calculation results perform the calculation according to the received image processing calculation tasks and return the calculation results to the vehicle-mounted main controller I after completion, and the vehicle-mounted main controller I combines and collates the i calculation results to generate a new image.
2. The vehicle-mounted camera-based image processing and target recognition method according to claim 1, wherein: after the image processing module (2) judges that the data conversion fails, uploading the original image and the data conversion failure information to the cloud platform (4); after the target recognition module (3) stores the target recognition result, the target recognition result is displayed through the display screen (5), and the new image and the target recognition result are uploaded to the cloud platform (4).
3. The vehicle-mounted camera-based image processing and target recognition method according to claim 1 or 2, wherein: the target identification module (3) consists of a vehicle-mounted main controller II and j vehicle-mounted auxiliary controllers II;
if the calculation power of the vehicle-mounted main controller II is enough, the vehicle-mounted main controller II executes target recognition to generate a target recognition result;
if the calculation power of the vehicle-mounted main controller II is insufficient, the vehicle-mounted main controller II sends a target recognition calculation task request to j vehicle-mounted auxiliary controllers II, j vehicle-mounted auxiliary controllers II receive the target recognition calculation task request and feed respective calculation power values back to the vehicle-mounted main controller II, the vehicle-mounted main controller II sequences the corresponding j vehicle-mounted auxiliary controllers II according to the calculation power values after receiving the j calculation power values, the vehicle-mounted main controller II averagely divides the target recognition calculation task into j parts, and sequentially sends the j target recognition calculation tasks to the corresponding j vehicle-mounted auxiliary controllers II according to the order of the calculation power values from large to small, the j vehicle-mounted auxiliary controllers II calculate according to the received target recognition calculation tasks, the calculation results are returned to the vehicle-mounted main controller II after the calculation is completed, and the j calculation results are combined and arranged after the vehicle-mounted main controller II receives the j calculation results, so that the target recognition results are generated.
4. The vehicle-mounted camera-based image processing and target recognition method according to claim 3, wherein:
if the vehicle-mounted main controller II receives calculation results returned by the j vehicle-mounted auxiliary controllers II within a preset time, the vehicle-mounted main controller II carries out combination and arrangement on the calculation results returned by the j vehicle-mounted auxiliary controllers II to generate a target identification result;
if, within the preset time, the vehicle-mounted main controller II receives only j-k2 calculation results returned by the vehicle-mounted auxiliary controllers II, and k2 ≤ j/2, the vehicle-mounted main controller II issues the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return calculation results to the k2 vehicle-mounted auxiliary controllers II that returned their calculation results first; the k2 vehicle-mounted auxiliary controllers II that returned their calculation results first perform the calculation according to the received target recognition calculation tasks and return the calculation results to the vehicle-mounted main controller II after completion, and the vehicle-mounted main controller II combines and collates the j calculation results to generate a target recognition result;
if, within the preset time, the vehicle-mounted main controller II receives only j-k2 calculation results returned by the vehicle-mounted auxiliary controllers II, and k2 > j/2, the vehicle-mounted main controller II combines the target recognition calculation tasks of the k2 vehicle-mounted auxiliary controllers II that did not return calculation results, divides them into j-k2 equal parts, and issues the j-k2 target recognition calculation tasks to the j-k2 vehicle-mounted auxiliary controllers II that returned calculation results; the j-k2 vehicle-mounted auxiliary controllers II that returned calculation results perform the calculation according to the received target recognition calculation tasks and return the calculation results to the vehicle-mounted main controller II after completion, and the vehicle-mounted main controller II combines and collates the j calculation results to generate a target recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110351792.7A CN113077378B (en) | 2021-03-31 | 2021-03-31 | Image processing and target identification method based on vehicle-mounted camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110351792.7A CN113077378B (en) | 2021-03-31 | 2021-03-31 | Image processing and target identification method based on vehicle-mounted camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113077378A CN113077378A (en) | 2021-07-06 |
CN113077378B (en) | 2024-02-09
Family
ID=76614536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110351792.7A Active CN113077378B (en) | 2021-03-31 | 2021-03-31 | Image processing and target identification method based on vehicle-mounted camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077378B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106060402A (en) * | 2016-07-06 | 2016-10-26 | 北京奇虎科技有限公司 | Image data processing method and device, and mobile terminal |
CN106063240A (en) * | 2013-11-14 | 2016-10-26 | 微软技术许可有限责任公司 | Image processing for productivity applications |
KR101828015B1 (en) * | 2016-09-30 | 2018-02-13 | 부산대학교 산학협력단 | Auto Exposure Control System and Method for Object Detecting based on On board camera |
WO2018171493A1 (en) * | 2017-03-21 | 2018-09-27 | 腾讯科技(深圳)有限公司 | Image processing method and device, and storage medium |
CN110166707A (en) * | 2019-06-13 | 2019-08-23 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and storage medium |
CN110166706A (en) * | 2019-06-13 | 2019-08-23 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111010532A (en) * | 2019-11-04 | 2020-04-14 | 武汉理工大学 | Vehicle-mounted machine vision system based on multi-focal-length camera group and implementation method |
CN111476066A (en) * | 2019-01-23 | 2020-07-31 | 北京奇虎科技有限公司 | Image effect processing method and device, computer equipment and storage medium |
CN111866407A (en) * | 2020-07-30 | 2020-10-30 | 深圳市阿达视高新技术有限公司 | Image processing method and device based on motion digital camera |
CN111953848A (en) * | 2020-08-19 | 2020-11-17 | Oppo广东移动通信有限公司 | System, method and related device for realizing application function through context awareness |
CN112328402A (en) * | 2020-11-25 | 2021-02-05 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | High-efficiency self-adaptive space-based computing platform architecture and implementation method thereof |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5712925B2 (en) * | 2009-09-28 | 2015-05-07 | 日本電気株式会社 | Image conversion parameter calculation apparatus, image conversion parameter calculation method, and program |
-
2021
- 2021-03-31 CN CN202110351792.7A patent/CN113077378B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN113077378A (en) | 2021-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117709B (en) | Collision avoidance system for autonomous vehicles | |
US10929715B2 (en) | Semantic segmentation using driver attention information | |
CN107886043B (en) | Vision-aware anti-collision early warning system and method for forward-looking vehicles and pedestrians of automobile | |
CN114418895A (en) | Driving assistance method and device, vehicle-mounted device and storage medium | |
US11250279B2 (en) | Generative adversarial network models for small roadway object detection | |
CN111386563A (en) | Teacher data collection device | |
CN114043989A (en) | Recursive graph and convolutional neural network-based driving style recognition model, lane change decision model and decision method | |
CN112009491B (en) | Deep learning automatic driving method and system based on traffic element visual enhancement | |
CN114266993A (en) | Image-based road environment detection method and device | |
CN113077378B (en) | Image processing and target identification method based on vehicle-mounted camera | |
US20210224554A1 (en) | Image processing apparatus, vehicle, control method for information processing apparatus, storage medium, information processing server, and information processing method for recognizing a target within a captured image | |
CN114008698A (en) | External environment recognition device | |
CN115100251A (en) | Thermal imager and laser radar-based vehicle front pedestrian detection method and terminal | |
WO2020250527A1 (en) | Outside environment recognition device | |
CN109733347B (en) | Man-machine coupled longitudinal collision avoidance control method | |
CN113370991A (en) | Driving assistance method, device, equipment, storage medium and computer program product | |
CN114463710A (en) | Vehicle unmanned driving strategy generation method, device, equipment and storage medium | |
CN109910891A (en) | Control method for vehicle and device | |
WO2024018909A1 (en) | State estimation device, state estimation method, and state estimation program | |
CN115953765B (en) | Obstacle recognition method for automatic driving of vehicle | |
CN111837125A (en) | Method for providing a set of training data sets, method for training a classifier, method for controlling a vehicle, computer-readable storage medium and vehicle | |
CN202694585U (en) | Vehicle-mounted visual traffic signal lamp intelligent detection apparatus | |
CN115782835A (en) | Automatic parking remote driving control method for passenger boarding vehicle | |
CN117612140A (en) | Road scene identification method and device, storage medium and electronic equipment | |
Zhou et al. | Intelligent Driving Assistance System for New Energy Vehicles Based on Intelligent Algorithms and ComputerVision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||