CN111491093B - Method and device for adjusting field angle of camera - Google Patents

Method and device for adjusting field angle of camera

Info

Publication number
CN111491093B
CN111491093B (application CN201910076012.5A)
Authority
CN
China
Prior art keywords
vehicle
camera
current camera
field angle
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910076012.5A
Other languages
Chinese (zh)
Other versions
CN111491093A (en)
Inventor
李亚
费晓天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Momenta Suzhou Technology Co Ltd
Original Assignee
Momenta Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Momenta Suzhou Technology Co Ltd filed Critical Momenta Suzhou Technology Co Ltd
Priority to CN201910076012.5A priority Critical patent/CN111491093B/en
Publication of CN111491093A publication Critical patent/CN111491093A/en
Application granted granted Critical
Publication of CN111491093B publication Critical patent/CN111491093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a method and a device for adjusting the field angle of a camera, wherein the method comprises the following steps: acquiring a road image which is acquired by a current camera and contains a target vehicle; splitting the road image according to the different faces of the target vehicle to obtain local detection frames of the faces of the target vehicle in the split image; and, for any local detection frame, determining the proportion value of the local detection frame in the split image and determining the adjustment mode of the current camera field angle according to that proportion value. With this technical scheme, the field angle of the currently working camera can be adjusted in real time according to the image observed in real time, so as to capture enough surrounding environment information.

Description

Method and device for adjusting field angle of camera
Technical Field
The invention relates to the technical field of automatic driving, in particular to a method and a device for adjusting the field angle of a camera.
Background
With the development of science and technology, emerging technologies such as "automatic driving" and "unmanned vehicles" have gradually entered the public eye. The field of unmanned vehicles must overcome technical problems such as high-precision vehicle detection and recognition, judgment of transient road conditions, and appropriate adjustment of speed and direction. Processing information about vehicles traveling on the road is often more complicated than analyzing road information such as lane lines, stationary obstacles, traffic lights and traffic signs, or pedestrian information in road areas. Information such as the speed, distance and steering of surrounding vehicles is instantaneous, variable and unpredictable, so the surrounding environment needs to be monitored in real time with a camera lens, and the surrounding conditions analyzed and processed with various computer vision algorithms.
Specifically, the vehicle uses an image sensor such as a camera to acquire a visual image of its current environment, and then uses vehicle detection technology and computer vision algorithms to process the image information of the nearby environment acquired by the sensor, obtaining the position and posture of each vehicle in the scene. The environment images acquired by the image sensor are multiple frames separated by short time intervals; moving vehicles are then tracked by a moving-target tracking technique; finally, based on the vehicle motion information obtained from the tracking, the distance between the target vehicle and the current vehicle and the movement speed of the target vehicle are calculated, thereby guiding the driving of the vehicle.
The traditional car-following method in the vision part of automatic driving uses a camera with a fixed focal length, which generally has the following problems: if the distance between the current vehicle and the target vehicle is short, or the target vehicle is large, a camera with a small field angle sees the vehicle occupy most of the image; the camera then cannot capture enough surrounding environment information for judgment, so an accurate and safe driving strategy cannot be given. Conversely, when the camera has a large field angle, the target vehicle appears small in the image, and the detection and tracking accuracy of the vision algorithm decreases.
Disclosure of Invention
The embodiment of the invention discloses a method and a device for adjusting the field angle of a camera, which can adjust the field angle of the currently working camera in real time according to the real-time observed image so as to capture enough surrounding environment information.
In a first aspect, an embodiment of the present invention discloses a method for adjusting a field angle of a camera, where the method includes:
acquiring a road image which is acquired by a current camera and contains a target vehicle;
splitting the road image according to different faces of the target vehicle to obtain local detection frames of all the faces of the target vehicle in the split image;
and for any local detection frame, determining the proportion value of the local detection frame in the split image, and determining the adjustment mode of the current camera field angle according to the proportion value.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, splitting the road image according to different faces of the target vehicle to obtain local detection frames of the faces of the target vehicle in the split image includes:
identifying the road image based on a preset vehicle target detection model to obtain an integral detection frame of the target vehicle;
extracting the target vehicle according to the overall detection frame, and splitting the road image according to different faces of the target vehicle by taking a characteristic line of the target vehicle as a base line based on a vehicle face detection segmentation model to obtain a local detection frame of each face of the target vehicle in the split image;
wherein the characteristic lines of the target vehicle include a length, a width, and a height of the vehicle.
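As an illustrative sketch only: the face segmentation itself is performed by the learned model described above, but the helper below shows how an overall detection frame could be split into two per-face local frames once a vertical ridge line is known. The `ridge_x` input and the box format `(x1, y1, x2, y2)` are assumptions for illustration, not part of the patent.

```python
def split_by_ridge(box, ridge_x):
    """Split an overall detection frame (x1, y1, x2, y2) into two
    per-face local frames along a vertical ridge line at x = ridge_x.
    A real system would obtain ridge_x from the learned vehicle face
    detection/segmentation model, not as a raw input."""
    x1, y1, x2, y2 = box
    if not (x1 < ridge_x < x2):
        return [box]                    # ridge outside the frame: one visible face
    return [(x1, y1, ridge_x, y2),      # e.g. the rear face
            (ridge_x, y1, x2, y2)]      # e.g. the visible side face
```

For a preceding vehicle seen from behind and slightly to one side, the two returned frames correspond to the rear face and one side face.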
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining, according to the ratio value, an adjustment manner of a current field angle of the camera includes:
if the proportion value is larger than a preset first proportion threshold, increasing the field angle of the current camera; or,
if the proportion value is smaller than a preset second proportion threshold, reducing the field angle of the current camera;
wherein the preset second proportion threshold is smaller than the preset first proportion threshold.
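The two-threshold rule can be sketched as follows; the concrete threshold values are illustrative assumptions, since the patent only requires that the second threshold be smaller than the first:

```python
def fov_adjustment(ratio, first_threshold=0.5, second_threshold=0.1):
    """Map a local-frame proportion value to an adjustment direction.
    Threshold values are placeholders; the patent only requires
    second_threshold < first_threshold."""
    assert second_threshold < first_threshold
    if ratio > first_threshold:
        return "increase"   # face fills the image: widen the field angle
    if ratio < second_threshold:
        return "decrease"   # face too small to track well: narrow it
    return "keep"
```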
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the increasing the field angle of the current camera includes:
if the focal length of the current camera is adjustable, reducing the focal length of the current camera to increase the field angle of the current camera; or,
and if the focal length of the current camera is not adjustable, selecting a target camera with the focal length smaller than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the reducing the field angle of the current camera includes:
if the focal length of the current camera is adjustable, increasing the focal length of the current camera to reduce the field angle of the current camera; or,
and if the focal length of the current camera is not adjustable, selecting a target camera with the focal length larger than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
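A sketch of both claimed mechanisms (zooming when the focal length is adjustable, otherwise switching among fixed-focal-length cameras). The `Camera` model, the step size, and the choice of the nearest suitable fixed camera are assumptions; only the direction of each focal-length change follows the text (shorter focal length means a wider field angle):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Camera:
    focal_mm: float
    zoomable: bool = False

def widen_fov(current: Camera, fixed: List[Camera],
              zoom_step_mm: float = 2.0) -> Camera:
    """Increase the field angle: shorten the focal length if the current
    camera can zoom; otherwise switch to the fixed camera whose focal
    length is the largest one still shorter than the current one."""
    if current.zoomable:
        current.focal_mm = max(current.focal_mm - zoom_step_mm, 1.0)
        return current
    shorter = [c for c in fixed if c.focal_mm < current.focal_mm]
    return max(shorter, key=lambda c: c.focal_mm) if shorter else current

def narrow_fov(current: Camera, fixed: List[Camera],
               zoom_step_mm: float = 2.0) -> Camera:
    """Decrease the field angle: lengthen the focal length if zoomable;
    otherwise switch to the fixed camera whose focal length is the
    smallest one still longer than the current one."""
    if current.zoomable:
        current.focal_mm += zoom_step_mm
        return current
    longer = [c for c in fixed if c.focal_mm > current.focal_mm]
    return min(longer, key=lambda c: c.focal_mm) if longer else current
```

Choosing the nearest suitable fixed camera (rather than the extreme one) mirrors the gradual switching described in the embodiments below; an emergency jump to the extreme camera is also possible.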
As an optional implementation manner, in the first aspect of this embodiment of the present invention, the method further includes:
determining the relative speed and/or the relative angle of the target vehicle and the current vehicle where the current camera is located;
and determining the adjustment mode of the current camera field angle according to the relative speed and/or the relative angle.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the preset vehicle target detection model is obtained by training a pre-established initial neural network model by using a road sample image labeled with a vehicle position.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the vehicle surface detection segmentation model is obtained by training a pre-established initial depth regression network model by using a vehicle sample image labeled with vehicle feature information.
Wherein the vehicle characteristic information includes a vehicle wheel point, a characteristic line, and an orientation.
In a second aspect, an embodiment of the present invention further provides an apparatus for adjusting a field angle of a camera, where the apparatus includes:
the road image acquisition module is used for acquiring a road image which is acquired by a current camera and contains a target vehicle;
the local detection frame determining module is used for splitting the road image according to different faces of the target vehicle to obtain local detection frames of the faces of the target vehicle in the split image;
and the field angle adjusting module is used for determining the proportion value of any local detection frame in the split image and determining the adjusting mode of the field angle of the current camera according to the proportion value.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the local detection frame determining module is specifically configured to:
identifying the road image based on a preset vehicle target detection model to obtain an integral detection frame of the target vehicle;
extracting the target vehicle according to the overall detection frame, and splitting the road image according to different faces of the target vehicle by taking a characteristic line of the target vehicle as a base line based on a vehicle face detection segmentation model to obtain a local detection frame of each face of the target vehicle in the split image;
wherein the characteristic lines of the target vehicle include a length, a width, and a height of the vehicle.
As an optional implementation manner, in a second aspect of the embodiment of the present invention, the field angle adjusting module includes:
the field angle increasing unit is used for increasing the field angle of the current camera if the proportion value is larger than a preset first proportion threshold; or,
the field angle reducing unit is used for reducing the field angle of the current camera if the proportion value is smaller than a preset second proportion threshold;
wherein the preset second proportion threshold is smaller than the preset first proportion threshold.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the field angle increasing unit is specifically configured to:
if the focal length of the current camera is adjustable, reduce the focal length of the current camera to increase the field angle of the current camera; or,
and if the focal length of the current camera is not adjustable, selecting a target camera with the focal length smaller than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the field angle reducing unit is specifically configured to:
if the focal length of the current camera is adjustable, increase the focal length of the current camera to reduce the field angle of the current camera; or,
and if the focal length of the current camera is not adjustable, selecting a target camera with the focal length larger than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
As an alternative implementation, in a second aspect of the embodiment of the present invention, the apparatus further includes:
the relative speed and relative angle determining module is used for determining the relative speed and/or relative angle between the target vehicle and the current vehicle where the current camera is located;
and the adjustment mode determining module is used for determining the adjustment mode of the current camera field angle according to the relative speed and/or the relative angle.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the preset vehicle target detection model is obtained by training a pre-established initial neural network model by using a road sample image labeled with a vehicle position.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the vehicle surface detection segmentation model is obtained by training a pre-established initial depth regression network model by using a vehicle sample image labeled with vehicle feature information; wherein the vehicle characteristic information includes a vehicle wheel point, a characteristic line, and an orientation.
In a third aspect, an embodiment of the present invention further provides a vehicle-mounted terminal, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the method for adjusting the field angle of the camera provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program includes instructions for executing part or all of the steps of the method for adjusting the field angle of the camera provided in any embodiment of the present invention.
In a fifth aspect, an embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to perform part or all of the steps of the method for adjusting the field angle of the camera provided in any embodiment of the present invention.
According to the technical scheme provided by the embodiment, after the road image which is acquired by the current camera and contains the target vehicle is acquired, the road image is split according to different faces of the target vehicle, and the local detection frame of each face of the target vehicle in the split image can be acquired. For any local detection frame, the adjustment mode of the current camera angle can be determined according to the proportion value of the local detection frame in the split image, so that the camera angle can be correspondingly adjusted according to the change of the observation image, and enough surrounding environment information can be captured for judging the subsequent driving strategy.
The key points of the invention include:
1. Determining the proportion value, in the split image, of a local detection frame obtained by splitting the road image, and then determining the adjustment mode of the current camera field angle according to that proportion value, so that the field angle is adjusted correspondingly as the real-time observed image changes, is one of the key points of the invention.
2. When adjusting the field angle of the camera, a mode of switching among a plurality of fixed-focal-length cameras is adopted. If the field angle of the current camera is too large (or too small), a camera with a field angle smaller (or larger) than the current one is switched in as the working camera, so that the switched-in camera can capture all regions of interest during driving. This solves the problem in the prior art that each camera estimates the photographed objects independently, so that the historical information acquired by the other cameras is not fully utilized.
3. The field angle of the camera is adjusted by adjusting the focal length of the camera, so that adaptive adjustment of the field angle is achieved with a single auto-zoom camera and several cameras need not work simultaneously, which reduces the system load and the difficulty of information processing.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for adjusting a field angle of a camera according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for adjusting a field angle of a camera according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for adjusting a field angle of a camera according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for training a vehicle target detection model according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for training a vehicle segmentation model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for adjusting a field angle of a camera according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a method for adjusting the field angle of a camera according to an embodiment of the present invention. The method is applied to automatic driving, can be executed by a device for adjusting the field angle of a camera, can be implemented in software and/or hardware, and can generally be integrated in a vehicle-mounted terminal such as a vehicle-mounted computer or a vehicle-mounted Industrial control Computer (IPC); the embodiment of the present invention is not limited in this respect. As shown in fig. 1, the method for adjusting the field angle of a camera provided in this embodiment specifically includes:
110. Acquiring a road image which is acquired by the current camera and contains the target vehicle.
The target vehicle is a vehicle that may affect the driving strategy of the current vehicle, for example a preceding vehicle, a following vehicle, or a side vehicle traveling in the same direction as, or opposite to, the current vehicle. The distance between the target vehicle and the current vehicle is generally within a preset distance range.
In general, the road image collected by the camera does not include exactly one of the front, rear, left or right four faces of the target vehicle, but often includes two or more faces, for example, for a vehicle in front of the current vehicle and running in the same direction as the current vehicle, the road image typically includes the rear face of the vehicle in front and one of the left and right side faces; in the case of a rear vehicle traveling behind the current vehicle in the same direction as the current vehicle, the road image generally includes the front of the rear vehicle and one of the left and right side surfaces.
120. Splitting the road image according to different faces of the target vehicle to obtain local detection frames of all the faces of the target vehicle in the split image.
Wherein the different sides of the subject vehicle may include, but are not limited to, front, rear, left side, right side, and the like. The various faces of the target vehicle may be divided by identifying the vehicle's ridges, i.e., the connecting lines between the different faces. When the road image is split according to different faces of the target vehicle, the ridge line can be used as a base line to split the road image. The split road image generally only contains one surface of the vehicle, and the position of the surface in the split road image can be determined through the local detection frame.
Illustratively, the local detection box is determined by coordinates of an upper left point and a lower right point of a pixel range occupied by a certain face of the target vehicle in the split image. A rectangular area drawn by these two coordinates can be used as a local detection frame. The local detection frame includes all the pixel points on the side of the target vehicle.
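The corner-point representation just described can be made concrete; the `(x, y)` coordinate convention (image y-axis pointing down) and the helper name are illustrative:

```python
def frame_area(top_left, bottom_right):
    """Area in pixels of a local detection frame given its top-left and
    bottom-right corner coordinates (x, y), with the image y-axis
    pointing downward as is conventional for pixel coordinates."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    assert x2 >= x1 and y2 >= y1
    return (x2 - x1) * (y2 - y1)
```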
130. For any local detection frame, determining the proportion value of the local detection frame in the split image, and determining the adjustment mode of the current camera field angle according to the proportion value.
For example, the proportion value of the local detection frame in the split image may be determined in one of the following three ways, or by combining them:
(1) dividing the width of the local detection frame by the width of the split image; (2) dividing the height of the local detection frame by the height of the split image; (3) dividing the area of the local detection frame by the area of the split image. A result obtained by combining the three ways is more accurate than one determined by a single way.
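The three measures, and one plausible combination, can be sketched as follows. Averaging the three is an assumption; the patent says only that combining them is more accurate, without fixing the combination rule:

```python
def width_ratio(frame_w, image_w):
    return frame_w / image_w          # way (1): width quotient

def height_ratio(frame_h, image_h):
    return frame_h / image_h          # way (2): height quotient

def area_ratio(frame_w, frame_h, image_w, image_h):
    return (frame_w * frame_h) / (image_w * image_h)  # way (3): area quotient

def combined_ratio(frame_w, frame_h, image_w, image_h):
    """Average of the three measures; the averaging rule is an assumption."""
    return (width_ratio(frame_w, image_w)
            + height_ratio(frame_h, image_h)
            + area_ratio(frame_w, frame_h, image_w, image_h)) / 3.0
```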
In this embodiment, the proportion of a local detection frame in the split image to which it belongs describes how large an area a given face of the vehicle occupies in that image, and therefore also indicates whether the field angle of the current camera is appropriate. Hence, after the proportion value of the local detection frame in the split image is determined, the field angle of the camera can be adjusted according to that value.
For example, if the proportion value is greater than the preset first proportion threshold, it indicates that the field angle of the current camera is too small and the target vehicle occupies most of the image; the camera then cannot capture enough surrounding environment information, so an accurate and safe driving strategy cannot be given based on the information the camera captures. In this case, the field angle of the camera needs to be increased. If the proportion value is smaller than the preset second proportion threshold, it indicates that the field angle of the current camera is too large and the target vehicle appears small in the image, which reduces the detection and tracking accuracy of the vision algorithm. In this case, the field angle of the camera needs to be reduced.
In practical application, when determining the proportion of a local detection frame in the split image, one of the split vehicle faces may be selected for the determination, in consideration of efficiency, cost and other factors. However, to further improve the accuracy of the determination, several faces may be selected and judged comprehensively; the embodiment of the present invention is not limited in this respect. In a comprehensive judgment, a different proportion threshold may be set for the local detection frame corresponding to each vehicle face, and each may be handled separately.
In the present embodiment, after the adjustment method of the angle of view of the camera is determined, the angle of view of the camera can be adjusted in the following two ways.
As an alternative embodiment, two or more fixed-focal-length cameras with different field angles may be mounted on the current vehicle in advance, and the field angle may be adjusted indirectly by switching the currently working camera. Specifically, when the judgment shows that the field angle of the camera needs to be increased, the current camera is switched to another camera whose field angle is larger than the current one; when the judgment shows that the field angle needs to be reduced, the current camera is switched to another camera whose field angle is smaller than the current one.
As another alternative, a camera with an adjustable focal length may be mounted on the current vehicle. When the adjustment mode of the current camera field angle has been determined, the focal length of the current camera can be adjusted directly, that is, the focal length is changed adaptively according to the calculated proportion value. Specifically, the longer the focal length of the camera lens, the smaller the field angle of the lens; so when the judgment shows that the field angle of the camera needs to be increased, this can be achieved by decreasing the focal length of the camera, and when the judgment shows that the field angle needs to be reduced, by increasing the focal length of the camera.
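The inverse relation between focal length and field angle relied on here is the standard pinhole-camera approximation, FOV = 2 * arctan(d / (2 f)) for sensor width d and focal length f; the formula and the numeric example are textbook optics, not taken from the patent:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field angle in degrees under the pinhole/thin-lens
    approximation: FOV = 2 * arctan(sensor_width / (2 * focal_length)).
    Longer focal length -> smaller field angle, as the text states."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))
```

For a 36 mm wide sensor, f = 18 mm gives a 90 degree horizontal field angle, and shortening f widens it, matching the adjustment rule above.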
Furthermore, in this embodiment, whichever way is adopted, after the field angle of the camera is adjusted it can be verified, according to the proportion of the newly obtained local detection frame in the split image, whether the currently selected camera or the current focal length is appropriate. If it is not appropriate, for example if the proportion of the local detection frame still exceeds a preset proportion threshold, the field angle of the camera must continue to be adjusted in either way until the proportion value of the vehicle's local detection frame in the split image meets the preset proportion threshold. For example, if a vehicle is loaded with cameras of different field angles, the field angles may be divided into 4 categories: 0-30 degrees, 30-50 degrees, 50-100 degrees, and above 100 degrees. When the field angle needs to be increased, the cameras can be switched in gradually increasing order of field angle until the proportion value of the vehicle's local detection frame in the split image meets the preset proportion threshold. Alternatively, the field angle of the current camera can be increased gradually by reducing the focal length of the camera in set steps until the proportion value meets the preset proportion threshold.
For example, in actual processing, the camera may also be switched discontinuously, for example by jumping directly from a camera with a 0-30 degree field angle to one with a field angle above 100 degrees (or by directly adjusting the focal length of the current camera to its minimum). When the distance between the front and rear vehicles becomes too short, for example because the driving speed is high or the front vehicle suddenly decelerates or brakes, and the proportion of the vehicle's local detection frame is judged to greatly exceed the highest threshold, the system can switch directly to the camera with a field angle above 100 degrees. The specific setting and determination of the highest threshold may be based on the results of simulation experiments.
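The gradual and jump-switching policies can be sketched together. The field-angle categories follow the example in the text; the `hard_max` threshold, the concrete ratio values, and the function shape are assumptions:

```python
FOV_CLASSES = [(0, 30), (30, 50), (50, 100), (100, 180)]  # degrees, per the text

def next_fov_class(current_idx, ratio, first_threshold=0.5, hard_max=0.9):
    """Pick the next camera class when the view may need to widen.
    ratio > hard_max        -> jump straight to the widest class
                               (emergency case, e.g. sudden braking ahead);
    ratio > first_threshold -> step up one class (gradual switching);
    otherwise               -> keep the current camera.
    Threshold values are illustrative placeholders."""
    widest = len(FOV_CLASSES) - 1
    if ratio > hard_max:
        return widest
    if ratio > first_threshold:
        return min(current_idx + 1, widest)
    return current_idx
```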
According to the technical scheme provided by this embodiment, after the road image containing the target vehicle is acquired by the current camera, the road image is split according to the different faces of the target vehicle, yielding a local detection frame for each face in the split image. For any local detection frame, the adjustment mode of the current camera's field angle can be determined from the proportion of that frame in the split image, so that the field angle is adjusted in step with changes in the observed image and enough surrounding-environment information is captured for subsequent driving-strategy decisions.
Example two
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for adjusting the field angle of a camera according to an embodiment of the present invention. This embodiment is optimized on the basis of the above embodiment. As shown in fig. 2, the method includes:
210. Acquiring a road image containing the target vehicle collected by the current camera.
220. Splitting the road image according to the different faces of the target vehicle to obtain a local detection frame for each face of the target vehicle in the split image.
230. For any local detection frame, determining the proportion of the local detection frame in the split image, and if the proportion is greater than the preset first proportion threshold, increasing the field angle of the current camera.
For example, increasing the field angle of the current camera can be achieved in any one of the following two ways:
(1) Direct adjustment. Since the longer the focal length of the camera lens, the smaller its field angle, if the focal length of the current camera is adjustable, the field angle of the current camera can be increased by decreasing its focal length. In this way, real-time adjustment of the field angle according to the real-time observed image can be achieved with a single camera. The advantage of this arrangement is that multiple cameras need not work simultaneously, which reduces the system load and the difficulty of information processing.
(2) Indirect adjustment. If the focal length of the current camera is not adjustable, the current camera is switched to a target camera whose focal length is smaller than that of the current camera, and the target camera becomes the working camera, so that the field angle in use becomes the field angle corresponding to the target camera. This arrangement addresses the problems that estimates of objects shot by multiple cameras are mutually independent and that historical information acquired by other cameras is not fully utilized.
240. For any local detection frame, determining the proportion of the local detection frame in the split image, and if the proportion is smaller than the preset second proportion threshold, reducing the field angle of the current camera.
For example, reducing the field angle of the current camera can be achieved in any one of the following two ways:
(1) Direct adjustment. If the focal length of the current camera is adjustable, the focal length of the current camera is increased directly to reduce its field angle.
(2) Indirect adjustment. If the focal length of the current camera is not adjustable, a target camera whose focal length is greater than that of the current camera is selected as the working camera, so that the field angle in use becomes the field angle corresponding to the target camera.
For the method (2) in steps 230 and 240 above, when the field-angle range of each fixed-focal-length camera is known, an appropriate camera can be selected directly according to the required field-angle adjustment, without reference to the focal length.
It should be further noted that steps 230 and 240 are two parallel embodiments with no fixed order of execution between them. In practical application, the corresponding adjustment mode of the current camera's field angle can be selected according to the calculated proportion value, enabling real-time adjustment of the field angle.
In this embodiment, the preset first proportion threshold in steps 230 and 240 is greater than the preset second proportion threshold. Both thresholds are empirical values and can be adjusted according to training results on an early-stage data set. The data set may be derived from a driving recorder during real driving or from images captured during simulated driving; in some possible implementations, other means may be used to obtain the data set, with the thresholds adjusted as necessary according to the training effect. In this embodiment, the first preset proportion threshold may be set to 75% and the second preset proportion threshold to 5%.
In practical application, to improve the accuracy of the judgment, the local detection frames corresponding to several split faces of the vehicle may be combined for a comprehensive judgment of the proportion values. In this case, a high threshold and a low threshold may be set for each split face (or local detection frame). In this embodiment, the preset first proportion threshold may serve as the high threshold and the preset second proportion threshold as the low threshold.
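Using the 75% and 5% thresholds of this embodiment, the choice between step 230 and step 240 can be sketched as follows (illustrative only; the function and return-value names are assumptions):

```python
def adjustment_mode(ratio, high=0.75, low=0.05):
    """Map the proportion of a local detection frame in the split image to
    an adjustment mode, using the empirical thresholds of this embodiment."""
    if ratio > high:
        # Step 230: the target fills too much of the frame; shorten the
        # focal length or switch to a wider-angle camera.
        return "increase_field_angle"
    if ratio < low:
        # Step 240: the target is too small; lengthen the focal length or
        # switch to a narrower-angle camera.
        return "decrease_field_angle"
    return "keep"
```

A proportion between the two thresholds leaves the current camera unchanged.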
Further, on the basis of the above embodiment, the relative speed and/or relative angle between the target vehicle and the current vehicle carrying the camera can be determined, and the adjustment mode of the current camera's field angle determined from that relative speed and/or relative angle.
The relative speed of the target vehicle and the current vehicle may be determined from the speeds of the two vehicles. The speed of the current vehicle may be obtained from its wheel-speed sensor data, and the speed of the target vehicle may be determined by a vehicle-speed detection method such as the optical-flow method or radar speed measurement. In this embodiment, the relative angle between the target vehicle and the current vehicle is their relative heading angle, which may be determined from the position of the current vehicle combined with the position of the target vehicle in the road image.
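The relative quantities described above can be computed as in this small sketch (the sign and wrap conventions, and the function names, are assumptions for illustration):

```python
def relative_speed(v_current_kmh, v_target_kmh):
    """Relative speed in km/h; positive means the target vehicle is
    pulling away from the current vehicle."""
    return v_target_kmh - v_current_kmh

def relative_heading(current_deg, target_deg):
    """Relative heading angle wrapped into [-180, 180) degrees."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0
```

The wrap keeps, for example, headings of 350° and 10° only 20° apart rather than 340°.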
In this embodiment, the relative speed and/or relative angle between the target vehicle and the current vehicle is determined in order to further estimate the distance between the two vehicles. On the basis of the proportion of the local detection frame in the split image, the adjustment mode of the current camera's field angle is then refined according to this distance together with the proportion value, which makes the adjustment result more accurate.
It should be noted that determining the relative speed and/or relative angle between the target vehicle and the current vehicle also allows special cases of the proportion values to be handled. For example, at some moment the height (or width) proportion of the side-face local detection frame in a split image may exceed the side-face high threshold while the proportion of the rear-face local detection frame remains below its high threshold. In that case the field angle must be adjusted by combining multiple factors, such as the relative speed and/or relative angle between the target vehicle and the current vehicle, to comprehensively determine whether to switch to a large-field-angle camera.
Specifically, if the proportion of the side-face local detection frame in the split image is greater than the preset first proportion threshold while the proportion of the rear-face local detection frame is smaller than that threshold, the two vehicles may be driving side by side in the same direction at too short a distance. In that case, the relative positions of the two vehicles' heads can be combined to judge whether the distance between them is smaller than a set threshold, for example 2 meters; if so, the system switches to a camera with a larger field angle. Alternatively, if the relative speed of the target vehicle and the current vehicle is greater than a set speed threshold, for example 40 km/h, the target vehicle is judged to be pulling away from the current vehicle, and the system needs to switch to a camera with a smaller field angle to observe the more distant road area.
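The side-by-side special case can be sketched as one combined decision (the 2 m and 40 km/h values come from the text above; everything else, including the names, is illustrative):

```python
def side_by_side_decision(side_ratio, rear_ratio, distance_m, rel_speed_kmh,
                          high=0.75, dist_thresh=2.0, speed_thresh=40.0):
    """Resolve the case where the side-face proportion exceeds the high
    threshold while the rear-face proportion does not."""
    if side_ratio > high and rear_ratio < high:
        if distance_m < dist_thresh:
            return "switch_to_larger_field_angle"   # side by side, too close
        if rel_speed_kmh > speed_thresh:
            return "switch_to_smaller_field_angle"  # target pulling away
    return "keep"
```

Only when the two proportions disagree does the distance or relative speed break the tie.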
On the basis of the above embodiments, this embodiment provides corresponding field-angle adjustment manners for fixed-focal-length and variable-focal-length cameras, so that the field angle can be adjusted in step with changes in the observed image and enough surrounding-environment information captured for subsequent driving-strategy decisions. In addition, this embodiment provides a scheme for determining the adjustment mode of the current camera's field angle from the relative speed and/or relative angle between the target vehicle and the current vehicle, which makes the adjustment result more accurate.
Example three
Referring to fig. 3, fig. 3 is a schematic flowchart of a method for adjusting the field angle of a camera according to an embodiment of the present invention. It is optimized on the basis of the above embodiment and introduces a specific manner of splitting the road image according to the different faces of the target vehicle. As shown in fig. 3, the method provided in this embodiment specifically includes:
310. Acquiring a road image containing the target vehicle collected by the current camera.
320. Identifying the road image based on a preset vehicle target detection model to obtain the overall detection frame of the target vehicle.
The preset vehicle target detection model is obtained by training a pre-established initial neural network model by using a road sample image labeled with a vehicle position.
The vehicle position in the road sample image is marked by the coordinates of the upper-left and lower-right points of the pixel range the vehicle occupies in the image; these two coordinates determine one vehicle detection frame. A road image may contain several vehicles, and each vehicle's position can be labeled by the upper-left and lower-right points of its range. It should be noted that the rectangle drawn from the two end points should contain all the pixel points of the vehicle.
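The two-corner labeling convention can be expressed as a small helper (hypothetical names; image coordinates are assumed to grow rightward and downward):

```python
def box_from_corners(top_left, bottom_right):
    """Build a vehicle detection frame from the labeled upper-left and
    lower-right points of the pixel range the vehicle occupies."""
    (x1, y1), (x2, y2) = top_left, bottom_right
    assert x2 > x1 and y2 > y1, "lower-right must lie below and right of upper-left"
    return {"x": x1, "y": y1, "w": x2 - x1, "h": y2 - y1}

def contains(box, px, py):
    """A valid annotation should satisfy this for every vehicle pixel."""
    return (box["x"] <= px <= box["x"] + box["w"]
            and box["y"] <= py <= box["y"] + box["h"])
```

Checking `contains` for every labeled vehicle pixel verifies the "rectangle covers all pixel points" requirement.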
In this embodiment, the road video acquired by the camera is input frame by frame to the pre-trained vehicle target detection model, and the position of the vehicle in the road image, i.e. the overall detection frame of the vehicle, can be determined from the model's output. For the training process of the vehicle target detection model, refer to the fourth embodiment.
330. Extracting the target vehicle according to the overall detection frame, and, based on the vehicle surface detection segmentation model, splitting the road image according to the different faces of the target vehicle with the characteristic lines of the target vehicle as base lines, to obtain a local detection frame for each face of the target vehicle in the split image.
The characteristic lines (ridge lines) of the target vehicle are the lines connecting its different surfaces, running along the vehicle's length, width, and height.
In this embodiment, the vehicle surface detection segmentation model is obtained by training a pre-established initial depth regression network model by using a vehicle sample image labeled with vehicle feature information, and the specific training process may refer to the content provided in the fourth embodiment.
The vehicle characteristic information includes the vehicle's wheel points, characteristic lines, and orientation. A wheel point is the contact point of a wheel with the ground, and the orientation is the direction the vehicle faces when moving forward (when the road image contains only one surface of the target vehicle).
In this embodiment, after the position of the target vehicle in the road image is obtained, that is, after the overall detection frame of the target vehicle is determined, the target vehicle can be cropped out and input to the vehicle surface detection segmentation model to obtain the orientation and ridge information of the target vehicle. The surfaces of the target vehicle are then divided, yielding a local detection frame for each surface in the split image. For any split image, the proportion of the vehicle-surface local detection frame in that image can be determined, or several split images can be judged together, to determine the adjustment mode of the camera's field angle.
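For a single split image, the proportion used in the judgment can be computed as below (an area-ratio sketch; the text also mentions height or width ratios as alternatives, and the names here are assumptions):

```python
def face_ratio(frame_w, frame_h, image_w, image_h):
    """Proportion of the split image occupied by one face's local
    detection frame (computed here as an area ratio)."""
    return (frame_w * frame_h) / float(image_w * image_h)
```

The resulting value is what gets compared against the high and low thresholds.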
340. For any local detection frame, determining the proportion of the local detection frame in the split image, and determining the adjustment mode of the current camera's field angle according to the proportion value.
On the basis of the above embodiment, this embodiment identifies the road image with the preset vehicle target detection model to determine the position of the target vehicle in the image. The target vehicle is then extracted from the original road image according to this position, and the road image is split according to the different faces of the target vehicle, with the characteristic lines of the target vehicle as base lines, based on the vehicle surface detection segmentation model, to obtain a local detection frame for each face in the split image. For any local detection frame, once its proportion in the split image is determined and the adjustment mode of the current camera's field angle is determined from that proportion, the adjusted camera can capture a wider road image.
Example four
Referring to fig. 4, fig. 4 is a flowchart of a training method for a vehicle target detection model according to an embodiment of the present invention, applied in the field of automatic driving. The method includes:
step 410: and acquiring a road sample image marked with the position of the vehicle.
The road sample image can be regarded as a sample image for training the vehicle target detection model. In this embodiment, the model may be trained in a supervised mode, so the road sample image is already labeled with vehicle position information. Marking the vehicle position in the form of a vehicle range frame can speed up model training and improve detection accuracy.
The vehicle position can be marked by the coordinates of the upper-left and lower-right points of the pixel range the vehicle occupies in the image; these two coordinates determine one vehicle detection frame. A road image may contain several vehicles, and each vehicle's position can be labeled by the upper-left and lower-right points of its range. It should be noted that the rectangle drawn from the two end points should contain all the pixel points of the vehicle.
In some possible implementations of embodiments of the invention, the processed images may be road images acquired by cameras located at the front or rear of the vehicle body. Generally, a camera at the front of the body collects information about target vehicles ahead of the current vehicle (travelling in the same or the opposite direction), and a camera at the rear collects information about target vehicles behind the current vehicle (likewise travelling in either direction). In the embodiment of the invention, a sample library can be established in advance and sample images obtained from it. The sample library can use images from public data sets, or images collected by the vehicle's camera can be obtained from the vehicle's storage device and the vehicle region in each image labeled, that is, the overall detection frame of the vehicle determined, to build the library. In some cases, the sample image may also be obtained directly: for example, an image collected in real time by the vehicle's camera is obtained, its vehicle position region is labeled, and the labeled image is used as the sample image.
Step 420: and inputting the road sample image into a pre-established initial neural network model.
After the road sample image is acquired, the road sample image may be input to a pre-established initial neural network model, so that the initial neural network model is trained by using the road sample image.
In some possible implementations of the embodiment of the present invention, the road sample image may be scaled to a preset size before being input into the pre-established initial neural network model. The initial neural network model then learns from road sample images of a uniform size, so that the road images can be processed more quickly and accurately and the training efficiency of the model improves. In some other possible implementations, the pre-established initial neural network model may include a spatial pyramid pooling layer and adapt to pictures of any size, in which case the road sample image need not be scaled, avoiding loss of image information. In addition, operations such as mirroring and channel transformation can be applied to the data-set images to expand the data set.
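The dataset-expansion operations just mentioned (mirroring and channel transformation) can be sketched on a toy nested-list image (illustrative only; a real pipeline would use an image library):

```python
def mirror(image):
    """Horizontal mirror of an image stored as rows of pixels."""
    return [list(reversed(row)) for row in image]

def swap_channels(image, order=(2, 1, 0)):
    """Reorder colour channels per pixel, e.g. RGB -> BGR."""
    return [[[px[i] for i in order] for px in row] for row in image]

# A 2x2 "image" with 3 channels per pixel.
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
```

Each transform yields a new labeled sample at essentially zero collection cost; note that mirroring also requires mirroring the box annotations.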
Step 430: and training an initial neural network model by using the sample image to obtain a vehicle target detection model.
For ease of understanding, the concept of a neural network model is first briefly introduced. A neural network is a network system formed by a large number of simple processing units widely interconnected, which is a highly complex nonlinear dynamical learning system with massive parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning capabilities. The neural network model is a mathematical model established based on the neural network, and is widely applied in many fields based on the strong learning capacity of the neural network model.
In the field of image processing and pattern recognition, a convolutional neural network model is often used for pattern recognition. Convolutional neural networks are generally composed of an input layer, convolutional layers, pooling layers, fully-connected layers, and the like. Due to the characteristics of partial connection of convolution layers and weight sharing in the convolutional neural network model, parameters needing to be trained are greatly reduced, the network model is simplified, and the training efficiency is improved.
Specifically, in this embodiment, a deep convolutional neural network may be used as the initial neural network model and trained with the labeled vehicle sample images. Besides designing a new deep convolutional neural network, a transfer-learning method can be adopted: an existing deep convolutional neural network that has achieved good results in target detection, such as the SSD (Single Shot MultiBox Detector), is taken, the number of output classes and any other structures that need modification are adapted, the fully trained parameters of the original network are used directly as the initial neural network model, and the network is fine-tuned with the road sample images. Specifically, the convolutional layers of the initial neural network model learn information such as vehicle shapes and feature points in the road sample images, and the fully connected layers map the learned features of the vehicle sample images to a recognition result for the vehicle position. Comparing this result with the vehicle position information labeled in advance in the road sample image allows the parameters of the initial neural network model to be optimized. A common parameter-optimization method is gradient descent. For supervised training, the difference between the recognition result and the pre-labeled result is expressed by a cost function; for example, a cost function commonly used for linear equations is:
J(θ) = (1/2m) Σ_{i=1}^{m} (ŷ_i − y_i)²

where y_i is the i-th dimension of the vector representing the true annotation result, ŷ_i is the i-th dimension of the vector representing the prediction result output after the input is fitted by the model, and m is the number of dimensions of the annotation result vector.
When the network model contains only one parameter θ1, the cost function can be expressed as a curve with the weight θ1 as the independent variable and the cost value as the dependent variable. The algorithm that finds the value of θ1 minimizing the cost is called the gradient descent algorithm: it differentiates the cost function with respect to the weight to find the direction in which the cost decreases fastest, updates θ1 in that direction, then recalculates the cost, differentiates, and updates again, until the gradient is 0 and the corresponding value of θ1 is found. Generalizing to multivariate functions, for each parameter θi (i = 1, 2, 3, 4, …) in the network model the derivative is taken to find the direction of change that makes the cost function decrease fastest, and the weights are updated iteratively, each iteration proceeding along the direction of fastest descent of the loss function. Therefore, after the initial neural network model undergoes iterative training on more training samples, the vehicle target detection model can be obtained.
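The single-parameter case described above can be sketched for a model y = θ1·x with the mean-squared cost (a toy illustration of gradient descent, not the patent's actual network training):

```python
def gradient_descent(xs, ys, lr=0.1, steps=200):
    """Minimise J(theta1) = (1/2m) * sum((theta1*x_i - y_i)^2) by moving
    theta1 against the derivative of the cost at each step."""
    theta1, m = 0.0, len(xs)
    for _ in range(steps):
        # dJ/dtheta1 = (1/m) * sum((theta1*x_i - y_i) * x_i)
        grad = sum((theta1 * x - y) * x for x, y in zip(xs, ys)) / m
        theta1 -= lr * grad   # step along the direction of fastest descent
    return theta1
```

On data generated by y = 2x, the iteration converges to θ1 ≈ 2.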
From the above, the present application provides a training method for a vehicle target detection model: obtain road sample images, label them with information such as vehicle detection frames (upper-left and lower-right points) covering all pixel points of each vehicle, input the road sample images into an initial neural network model, and train or fine-tune that model with the road sample images in a supervised-learning mode to obtain the vehicle target detection model. Because the initial neural network model is trained with road sample images labeled with vehicle position information, a large number of such images gives the trained vehicle target model higher accuracy and efficiency when predicting vehicle positions.
Next, a specific implementation of the training method for the vehicle surface detection segmentation model provided in the embodiment of the present invention is described.
Fig. 5 is a flowchart of a training method for a vehicle surface detection segmentation model according to an embodiment of the present invention, applied in the field of automatic driving. Referring to fig. 5, the method includes:
step 510: and acquiring a vehicle sample image, wherein the vehicle sample image is marked with information such as wheel points, ridge lines, orientation and the like of the vehicle.
The vehicle sample image can be regarded as a sample image for training the vehicle surface detection segmentation model. In the embodiment of the invention the model is trained in a supervised mode, so information such as the vehicle's wheel points, ridge lines, and orientation is labeled in the vehicle sample image. Labeling this information can speed up model training and improve detection accuracy.
A wheel point is the contact point of a wheel with the ground; intuitively, every wheel point must lie on either the left or the right side of the vehicle. A ridge line is a line connecting the left or right side surface of the vehicle with the front or rear surface. The orientation is the direction the target vehicle faces when moving forward (when the road sample image contains only one face of the target vehicle).
In the embodiment of the invention, the proportion of the vehicle detection frame in the real-time image acquired by the camera is affected by factors such as the orientation of the vehicle and the angle of the target vehicle relative to the current vehicle, which in turn affects subsequent judgments; therefore, this embodiment divides the vehicle into surfaces and judges each surface separately.
The images for training the vehicle surface detection segmentation model are mainly obtained by using the vehicle target detection model, trained with the training method above, to detect vehicles in road-environment images collected by the vehicle's camera, then segmenting the detection frames according to the different surfaces of the vehicle and taking the segmented vehicle images as vehicle sample images. In the embodiment of the invention, a sample library can be established in advance and sample images obtained from it. The sample library can use images from public data sets, or images collected by the vehicle's camera can be obtained from the vehicle's storage device; after the vehicle position is detected with a vehicle detection method, the vehicle part is cropped out and its wheel points, ridge lines, and orientation labeled, building the sample library. In some cases, the sample image may also be obtained directly: for example, an image collected in real time by the vehicle's camera is obtained, and after the vehicle is detected and segmented, the vehicle image is labeled and used as the sample image.
Step 520: and inputting the vehicle sample image into a pre-established initial depth regression network model.
After the vehicle sample image is acquired, the vehicle sample image may be input to a pre-established initial depth regression network model, so that the initial depth regression network model may be trained by using the vehicle sample image.
In some possible implementations of embodiments of the invention, the vehicle sample image may be scaled to a preset size before being input into the pre-established initial depth regression network model. The initial depth regression network model then learns from vehicle sample images of a uniform size, so that the vehicle images can be processed more quickly and accurately and the training efficiency of the model improves. In some other possible implementations, the pre-established initial depth regression network model may include a spatial pyramid pooling layer and adapt to pictures of any size, in which case the vehicle sample image need not be scaled, avoiding loss of image information.
Step 530: and training a depth regression network model by using the sample image to obtain a vehicle surface detection segmentation model.
A deep convolutional neural network can be used as the initial depth regression network model and trained with the vehicle sample images. Besides designing a new deep convolutional neural network, a transfer-learning method can be adopted: an existing deep convolutional neural network that has achieved good results in object detection, such as Faster R-CNN (Faster Regions with Convolutional Neural Network features), is taken, the number of output classes and any other structures that need modification are adapted, the fully trained parameters of the original network are used directly as the initial network model, and the network is fine-tuned with the vehicle sample images. Specifically, the convolutional layers of the initial depth regression network model learn features such as wheel points, ridge lines, and orientation in the vehicle sample images, and the fully connected layers map the learned features to recognition results for the wheel-point positions, ridge-line positions, and vehicle orientation. Comparing these results with the wheel-point positions, ridge-line positions, and vehicle orientation labeled in advance in the vehicle sample image allows the parameters of the initial depth regression network model to be optimized, and after iterative training on more training samples the vehicle surface detection segmentation model is obtained.
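The transfer-learning idea (keep pretrained backbone parameters, replace and retrain only the output head) can be sketched abstractly as follows (all layer names and values are illustrative, not from any real network):

```python
def build_finetune_model(pretrained, n_new_outputs):
    """Copy pretrained parameters, re-initialise the output head for the
    new task, and mark every other layer as frozen for early fine-tuning."""
    model = dict(pretrained)                      # reuse trained parameters
    model["head"] = [0.0] * n_new_outputs         # new, re-initialised head
    frozen = {name for name in model if name != "head"}
    return model, frozen

# Hypothetical pretrained parameter dictionary.
pretrained = {"conv1": [0.5, -0.2], "conv2": [0.1], "head": [0.9, 0.9]}
```

In a real framework the frozen set would correspond to parameters excluded from the optimizer, while the head is trained on the new labels.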
From the above, the present application provides a training method for a vehicle surface detection segmentation model: obtain vehicle sample images, label them with information such as wheel points, ridge lines, and vehicle orientation, input them into an initial depth regression network model, and train or fine-tune that model with the vehicle sample images in a supervised-learning mode to obtain the vehicle surface detection segmentation model. Because the initial depth regression network model is trained with vehicle sample images labeled with wheel-point positions, ridge lines, and vehicle orientations, a large number of such images gives the trained model higher accuracy and efficiency when predicting vehicle surfaces.
The fatigue monitoring and state analysis model mentioned in the above embodiments mainly uses a convolutional neural network model as its basis: it analyzes real-time images of the driver, extracts the face region and facial key points, and judges, based on a pre-trained facial-key-point posture classification network, whether the driver shows behaviors such as closing the eyes or opening the mouth to yawn, so as to quantitatively classify the driver's mental state while driving. With the continuous development of machine learning, the convolutional neural network models used in this embodiment also continue to develop. In particular, different types of convolutional neural networks may be employed as the initial neural network depending on the function of the model to be trained and the data it must process. Common convolutional neural networks for object detection include R-CNN (Regions with Convolutional Neural Network features), Fast R-CNN, Faster R-CNN, R-FCN (Region-based Fully Convolutional Network), YOLO (You Only Look Once), YOLO9000, SSD (Single Shot MultiBox Detector), NASNet (Neural Architecture Search Network), Mask R-CNN, and so on. In some possible implementations, the SSD may be used as the initial neural network model, with part of its structure modified and the network fine-tuned before training. In some possible implementations, the other convolutional neural networks mentioned above, or other networks that achieve good results in this area, may be used. The embodiments of the present application are not limited in this respect.
EXAMPLE five
Referring to fig. 6, fig. 6 is a schematic structural diagram of an apparatus for adjusting the field angle of a camera according to an embodiment of the present invention, which can be applied to the field of automatic driving. As shown in fig. 6, the apparatus includes a road image obtaining module 610, a local detection frame determining module 620, and a field angle adjusting module 630, wherein:
a road image obtaining module 610, configured to obtain a road image that includes a target vehicle and is collected by a current camera;
a local detection frame determining module 620, configured to split the road image according to different faces of the target vehicle, so as to obtain a local detection frame of each face of the target vehicle in the split image;
and a field angle adjusting module 630, configured to determine, for any local detection frame, the proportion value of the local detection frame in the split image, and to determine an adjustment mode of the field angle of the current camera according to the proportion value.
According to the technical solution provided by this embodiment, after the road image containing the target vehicle is acquired by the current camera, the road image is split according to the different faces of the target vehicle, and a local detection frame for each face of the target vehicle in the split image can be obtained. For any local detection frame, the adjustment mode of the field angle of the current camera can be determined according to the proportion value of the local detection frame in the split image, so that the field angle is adjusted in response to changes in the observed image and enough surrounding environment information is captured for subsequent driving strategy decisions.
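As an illustration only (the box coordinates and image size below are hypothetical), the proportion value can be computed as the area of the local detection frame divided by the area of the split image:

```python
def box_ratio(box, image_size):
    """Proportion of a detection frame within an image.

    box:        (x1, y1, x2, y2) -- top-left and bottom-right pixel coordinates
    image_size: (width, height) of the split image
    """
    x1, y1, x2, y2 = box
    box_area = max(0, x2 - x1) * max(0, y2 - y1)
    img_area = image_size[0] * image_size[1]
    return box_area / img_area

# A 320x240 rear-face frame inside a 1280x720 split image occupies ~8.3% of it.
ratio = box_ratio((100, 200, 420, 440), (1280, 720))
```

This ratio is the single scalar that the field angle adjusting module compares against the preset thresholds.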
On the basis of the foregoing embodiment, the local detection frame determining module 620 is specifically configured to:
identifying the road image based on a preset vehicle target detection model to obtain an integral detection frame of the target vehicle;
extracting the target vehicle according to the overall detection frame, and splitting the road image according to different faces of the target vehicle by taking a characteristic line of the target vehicle as a base line based on a vehicle face detection segmentation model to obtain a local detection frame of each face of the target vehicle in the split image;
wherein the characteristic lines of the target vehicle include a length, a width, and a height of the vehicle.
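As a simplified sketch of the splitting step (the ridge-line coordinate and box values are hypothetical): given the overall detection frame and the x-coordinate of a vertical characteristic line, e.g. the ridge line separating the rear face from the side face, the frame can be divided into two per-face local detection frames:

```python
def split_by_ridge_line(overall_box, ridge_x):
    """Split an overall vehicle detection frame at a vertical characteristic line.

    overall_box: (x1, y1, x2, y2) of the whole vehicle
    ridge_x:     x-coordinate of the ridge line, with x1 < ridge_x < x2
    Returns the two per-face local detection frames (left face, right face).
    """
    x1, y1, x2, y2 = overall_box
    if not (x1 < ridge_x < x2):
        raise ValueError("ridge line must fall inside the overall frame")
    return (x1, y1, ridge_x, y2), (ridge_x, y1, x2, y2)

# Splitting a vehicle frame at x=300 yields a side-face and a rear-face frame.
left, right = split_by_ridge_line((100, 150, 500, 400), 300)
```

In the patent the ridge-line position comes from the vehicle face detection segmentation model rather than being given directly; this sketch only shows the geometry of the split.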
On the basis of the above embodiment, the field angle adjusting module 630 includes:
a field angle increasing unit, used for increasing the field angle of the current camera if the proportion value is larger than a preset first proportion threshold; alternatively,
a field angle reducing unit, used for reducing the field angle of the current camera if the proportion value is smaller than a preset second proportion threshold;
wherein the preset second proportion threshold is smaller than the preset first proportion threshold.
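The two-threshold decision above can be sketched as follows (the threshold values 0.5 and 0.1 are hypothetical examples, not values from the patent):

```python
def decide_adjustment(ratio, upper=0.5, lower=0.1):
    """Map a detection-frame proportion value to a field-angle adjustment.

    ratio > upper  -> the vehicle fills too much of the frame: widen the field angle
    ratio < lower  -> the vehicle is too small in the frame: narrow the field angle
    otherwise      -> no adjustment needed
    """
    if ratio > upper:
        return "increase"
    if ratio < lower:
        return "decrease"
    return "keep"

# One decision per observed proportion value.
decisions = [decide_adjustment(r) for r in (0.7, 0.05, 0.3)]
```

Note that the lower threshold must be smaller than the upper one, matching the constraint stated above.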
On the basis of the above embodiment, the field angle increasing unit is specifically configured to:
if the focal length of the current camera is adjustable, reduce the focal length of the current camera to increase the field angle of the current camera; alternatively,
if the focal length of the current camera is not adjustable, select a target camera with a focal length smaller than that of the current camera as the running camera, so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
On the basis of the above embodiment, the field angle reducing unit is specifically configured to:
if the focal length of the current camera is adjustable, increase the focal length of the current camera to reduce the field angle of the current camera; alternatively,
if the focal length of the current camera is not adjustable, select a target camera with a focal length larger than that of the current camera as the running camera, so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
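The inverse relationship between focal length and field angle that both units rely on follows from the standard pinhole model, FOV = 2·atan(d / 2f), where d is the sensor dimension and f the focal length (the sensor size and focal lengths below are hypothetical examples):

```python
import math

def field_angle_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field angle of a pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Halving the focal length widens the field angle, as the units above assume:
wide = field_angle_deg(6.4, 4.0)  # short focal length -> larger field angle
tele = field_angle_deg(6.4, 8.0)  # long focal length  -> smaller field angle
```

This is why reducing the focal length (or switching to a shorter-focal-length camera) increases the field angle, and increasing it does the opposite.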
On the basis of the above embodiment, the apparatus further includes:
the relative speed and relative angle determining module is used for determining the relative speed and/or relative angle between the target vehicle and the current vehicle where the current camera is located;
and the adjustment mode determining module is used for determining the adjustment mode of the field angle of the current camera according to the relative speed and/or the relative angle.
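One possible way to fold relative speed and relative angle into the adjustment decision is sketched below; the thresholds and the widen-on-fast-approach policy are assumptions for illustration, not taken from the patent:

```python
def adjust_for_motion(relative_speed, relative_angle, speed_limit=5.0, angle_limit=30.0):
    """Hypothetical heuristic: widen the field angle when the target vehicle
    approaches quickly or sits far off the optical axis, so it stays in view.

    relative_speed: closing speed in m/s (positive = approaching)
    relative_angle: bearing of the target off the camera axis, in degrees
    """
    if relative_speed > speed_limit or abs(relative_angle) > angle_limit:
        return "increase"
    return "keep"

# A fast-approaching target near the axis still triggers a widening.
decision = adjust_for_motion(relative_speed=8.0, relative_angle=10.0)
```

In practice such a motion-based rule would be combined with the proportion-value rule described earlier rather than replace it.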
On the basis of the above embodiment, the apparatus is characterized in that:
the preset vehicle target detection model is obtained by training a pre-established initial neural network model by using a road sample image marked with a vehicle position.
On the basis of the above embodiment, the apparatus is characterized in that:
the vehicle surface detection segmentation model is obtained by training a pre-established initial depth regression network model by using a vehicle sample image labeled with vehicle characteristic information.
Wherein the vehicle characteristic information includes a vehicle wheel point, a characteristic line, and an orientation.
The apparatus for adjusting the field angle of a camera provided by the embodiment of the present invention can execute the method for adjusting the field angle of a camera provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing that method. For technical details not described in detail in the above embodiments, reference may be made to the method for adjusting the field angle of a camera provided in any embodiment of the present invention.
EXAMPLE six
Referring to fig. 7, fig. 7 is a schematic structural diagram of a vehicle-mounted terminal according to an embodiment of the present invention. As shown in fig. 7, the in-vehicle terminal may include:
a memory 701 in which executable program code is stored;
a processor 702 coupled to the memory 701;
the processor 702 calls the executable program code stored in the memory 701 to execute the method for adjusting the field angle of the camera according to any embodiment of the present invention.
The embodiment of the invention discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute the method for adjusting the field angle of a camera provided by any embodiment of the invention.
The embodiment of the invention discloses a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the method for adjusting the angle of view of the camera provided by any embodiment of the invention.
In the various embodiments of the present invention, it should be understood that the sequence numbers of the above processes do not imply a necessary order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to A" means that B is associated with A, and that B can be determined from A. It should also be understood, however, that determining B from A does not mean determining B from A alone; B may also be determined from A and/or other information.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute part or all of the steps of the methods of the embodiments of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.
The method and apparatus for adjusting the field angle of a camera disclosed by the embodiments of the present invention are described in detail above. Specific examples are applied herein to explain the principles and implementations of the present invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (9)

1. A method for adjusting the field angle of a camera is characterized by comprising the following steps:
acquiring a road image which is acquired by a current camera and contains a target vehicle;
splitting the road image according to different faces of the target vehicle to obtain a local detection frame of each face of the target vehicle in the split image, wherein the local detection frame is determined by coordinates of an upper left point and a lower right point of a pixel range occupied by a certain face of the target vehicle in the split image, a rectangular region drawn by the two coordinates is used as the local detection frame, and the local detection frame contains all pixel points of the face of the target vehicle, and the road image is split according to different faces of the target vehicle to obtain the local detection frame of each face of the target vehicle in the split image, and the method comprises the following steps:
identifying the road image based on a preset vehicle target detection model to obtain an integral detection frame of the target vehicle;
extracting the target vehicle according to the overall detection frame, and splitting the road image according to different faces of the target vehicle by taking a characteristic line of the target vehicle as a baseline on the basis of a vehicle face detection segmentation model to obtain a local detection frame of each face of the target vehicle in the split image, wherein the characteristic line of the target vehicle comprises the length, width and height of the vehicle;
for any local detection frame, determining a proportion value of the local detection frame in the split image, and determining an adjustment mode of the field angle of the current camera according to the proportion value, wherein the determining the adjustment mode of the field angle of the current camera according to the proportion value comprises:
if the proportion value is larger than a preset first proportion threshold, increasing the field angle of the current camera; alternatively,
if the proportion value is smaller than a preset second proportion threshold, reducing the field angle of the current camera;
wherein the preset second proportion threshold is smaller than the preset first proportion threshold.
2. The method of claim 1,
the increasing of the field angle of the current camera comprises the following steps:
if the focal length of the current camera is adjustable, reducing the focal length of the current camera to increase the field angle of the current camera; or if the focal length of the current camera is not adjustable, selecting a target camera with a focal length smaller than that of the current camera as a running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera;
the reducing of the field angle of the current camera comprises the following steps:
if the focal length of the current camera is adjustable, increasing the focal length of the current camera to reduce the field angle of the current camera; or if the focal length of the current camera is not adjustable, selecting a target camera with the focal length larger than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
determining the relative speed and/or the relative angle of the target vehicle and the current vehicle where the current camera is located;
and determining the adjustment mode of the current camera angle of view according to the relative speed and/or the relative angle.
4. The method of claim 1 or 2, wherein after adjusting the current camera field angle, the method further comprises:
verifying whether the currently selected camera or the focal length of the current camera meets a preset proportional threshold or not according to the proportional value of the local detection frame in the split image obtained after the field angle is adjusted; and if not, continuing to execute the operation of adjusting the field angle of the current camera until the proportion value of the vehicle local detection frame in the split image meets the preset proportion threshold.
5. The method according to claim 1 or 2, wherein the operation of adjusting the current camera view angle comprises:
gradually adjusting the field angle of the current camera according to a set step length; alternatively,
if the distance between the front vehicle and the rear vehicle reaches a preset distance, switching the current field angle to the maximum field angle.
6. An apparatus for adjusting a field angle of a camera, comprising:
the road image acquisition module is used for acquiring a road image which is acquired by a current camera and contains a target vehicle;
the local detection frame determining module is used for splitting the road image according to different faces of the target vehicle to obtain a local detection frame of each face of the target vehicle in the split image, the local detection frame is determined by coordinates of an upper left point and a lower right point of a pixel range occupied by a certain face of the target vehicle in the split image, a rectangular area drawn by the two coordinates is used as the local detection frame, and all pixel points of the face of the target vehicle are contained in the local detection frame;
the field angle adjusting module is used for determining the proportion value of any local detection frame in the split image and determining the adjusting mode of the field angle of the current camera according to the proportion value;
the local detection frame determination module is specifically configured to:
identifying the road image based on a preset vehicle target detection model to obtain an integral detection frame of the target vehicle;
extracting the target vehicle according to the overall detection frame, and splitting the road image according to different faces of the target vehicle by taking a characteristic line of the target vehicle as a base line based on a vehicle face detection segmentation model to obtain a local detection frame of each face of the target vehicle in the split image;
wherein the characteristic lines of the target vehicle include a length, a width, and a height of the vehicle;
the field angle adjusting module includes:
a field angle increasing unit, used for increasing the field angle of the current camera if the proportion value is larger than a preset first proportion threshold; alternatively,
a field angle reducing unit, used for reducing the field angle of the current camera if the proportion value is smaller than a preset second proportion threshold;
wherein the preset second proportion threshold is smaller than the preset first proportion threshold.
7. The apparatus according to claim 6, wherein the field angle increasing unit is specifically configured to:
if the focal length of the current camera is adjustable, reducing the focal length of the current camera to increase the field angle of the current camera; or if the focal length of the current camera is not adjustable, selecting a target camera with a focal length smaller than that of the current camera as a running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera;
the field angle reducing unit is specifically configured to:
if the focal length of the current camera is adjustable, increasing the focal length of the current camera to reduce the field angle of the current camera; or if the focal length of the current camera is not adjustable, selecting a target camera with the focal length larger than that of the current camera as the running camera so as to adjust the field angle of the current camera to the field angle corresponding to the target camera.
8. The apparatus of claim 6 or 7, further comprising:
the relative speed and relative angle determining module is used for determining the relative speed and/or relative angle between the target vehicle and the current vehicle where the current camera is located;
and the adjusting mode determining module is used for determining the adjusting mode of the current camera view angle according to the relative speed and/or the relative angle.
9. The apparatus of claim 6, wherein:
the vehicle surface detection segmentation model is obtained by training a pre-established initial depth regression network model by using a vehicle sample image labeled with vehicle characteristic information;
wherein the vehicle characteristic information includes a vehicle wheel point, a characteristic line, and an orientation.
CN201910076012.5A 2019-01-26 2019-01-26 Method and device for adjusting field angle of camera Active CN111491093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910076012.5A CN111491093B (en) 2019-01-26 2019-01-26 Method and device for adjusting field angle of camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910076012.5A CN111491093B (en) 2019-01-26 2019-01-26 Method and device for adjusting field angle of camera

Publications (2)

Publication Number Publication Date
CN111491093A CN111491093A (en) 2020-08-04
CN111491093B true CN111491093B (en) 2021-12-31

Family

ID=71795774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910076012.5A Active CN111491093B (en) 2019-01-26 2019-01-26 Method and device for adjusting field angle of camera

Country Status (1)

Country Link
CN (1) CN111491093B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967370B (en) * 2020-08-12 2021-12-07 广州小鹏自动驾驶科技有限公司 Traffic light identification method and device
CN111950504B (en) * 2020-08-21 2024-04-16 东软睿驰汽车技术(沈阳)有限公司 Vehicle detection method and device and electronic equipment
CN112507862B (en) * 2020-12-04 2023-05-26 东风汽车集团有限公司 Vehicle orientation detection method and system based on multitasking convolutional neural network
CN112734831A (en) * 2021-01-04 2021-04-30 广州小鹏自动驾驶科技有限公司 Labeling method and device
CN113942458B (en) * 2021-10-29 2022-07-29 禾多科技(北京)有限公司 Control method, device, equipment and medium for vehicle-mounted camera adjusting system
CN117730527A (en) * 2022-05-16 2024-03-19 深圳市大疆创新科技有限公司 Control method and device of cradle head, movable platform and storage medium
CN115550876B (en) * 2022-08-16 2023-05-30 北京连山科技股份有限公司 5G and ad hoc network integrated unmanned vehicle communication system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102342088A (en) * 2009-03-31 2012-02-01 爱信精机株式会社 Calibration Index For Use In Calibration Of Onboard Camera, Method Of Onboard Camera Calibration Using The Calibration Index And Program For Calibration Apparatus For Onboard Camera Using The Calibration Index
JP2014220757A (en) * 2013-05-10 2014-11-20 富士通セミコンダクター株式会社 Image processing device, and program and method thereof
CN107380164A (en) * 2016-07-07 2017-11-24 小蚁科技(香港)有限公司 Driver assistance system and support system based on computer vision
CN208029008U (en) * 2018-03-30 2018-10-30 比亚迪股份有限公司 Vehicle-mounted pick-up head system and vehicle with it
US10116873B1 (en) * 2015-11-09 2018-10-30 Ambarella, Inc. System and method to adjust the field of view displayed on an electronic mirror using real-time, physical cues from the driver in a vehicle
CN208337720U (en) * 2018-06-11 2019-01-04 昆山星际舟智能科技有限公司 Vehicle-mounted camera Automatic zoom lens focusing device

Also Published As

Publication number Publication date
CN111491093A (en) 2020-08-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20211217

Address after: 215100 floor 23, Tiancheng Times Business Plaza, No. 58, qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou, Jiangsu Province

Applicant after: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.

Address before: Room 601-a32, Tiancheng information building, No. 88, South Tiancheng Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: MOMENTA (SUZHOU) TECHNOLOGY Co.,Ltd.