CN117495754A - Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle - Google Patents

Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle

Info

Publication number
CN117495754A
Authority
CN
China
Prior art keywords
target
image
vehicle
overlapping region
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210856826.2A
Other languages
Chinese (zh)
Inventor
郭玉杰
陈翰军
陈现岭
刘贵波
王运航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Great Wall Motor Co Ltd
Original Assignee
Great Wall Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Great Wall Motor Co Ltd filed Critical Great Wall Motor Co Ltd
Priority to CN202210856826.2A
Publication of CN117495754A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme involving image mosaicing
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30248 - Vehicle exterior or interior
    • G06T2207/30252 - Vehicle exterior; Vicinity of vehicle
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Abstract

The application relates to a vehicle blind area detection method and device, electronic equipment, a storage medium and a vehicle. The method includes: acquiring a first target overlapping region and a second target overlapping region; stitching a first target image, a second target image and a third target image according to the two overlapping regions to generate a first target stitched image, where the first target stitched image displays the blind area in the line of sight of the driver of a target vehicle; and displaying the first target stitched image at a preset position inside the target vehicle. Without adding any hardware or altering the vehicle's original equipment, the method restores the part of the driver's view that is occluded by the vehicle frame and displays it inside the target vehicle, so that the vehicle's blind area is shown to the driver intuitively, improving the safety performance of the vehicle while reducing its cost.

Description

Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle
Technical Field
The application relates to the technical field of vehicles, and in particular to a vehicle blind area detection method and device, electronic equipment, a storage medium, and a vehicle.
Background
With continued economic development the number of private cars keeps growing, and road traffic accidents have increased with it. Some of these accidents are caused by blind areas created by the vehicle frame, for example the blind area behind the A-pillar. In the prior art, the field of view is enlarged by hollowing out part of the frame, but compared with the original frame design this reduces the structural safety of the car.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides a vehicle blind area detection method and device, electronic equipment, a storage medium and a vehicle.
According to a first aspect of an embodiment of the present application, there is provided a vehicle blind area detection method, including:
acquiring a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle;
and displaying the first target stitched image at a preset position in the target vehicle.
Optionally, after the step of displaying the first target stitched image at the preset position in the target vehicle, the method further includes:
identifying all target objects in the first target stitched image based on a preset target detection model;
predicting the moving track of each target object based on a preset prediction model;
determining a target state corresponding to each target object according to its moving track;
rendering the first target stitched image according to the target state corresponding to each target object to generate a second target stitched image;
and displaying the second target stitched image on an instrument panel of the target vehicle.
Optionally, before the step of acquiring the first target overlapping region and the second target overlapping region, the method further includes:
acquiring the first target image and the second target image, respectively, when a signal lamp is detected in the third target image of the target vehicle.
Optionally, determining the target state corresponding to each target object according to its moving track includes:
judging, from the moving track of each target object, whether the target object is approaching the target vehicle;
if so, marking the target object as being in a dangerous state;
and if not, marking the target object as being in a safe state.
Optionally, after the step of determining the target state corresponding to each target object according to its moving track, the method further includes:
sending an early-warning prompt to the target vehicle when the target state corresponding to a target object is detected to be the dangerous state.
Optionally, stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate the first target stitched image includes:
acquiring a first overlapping region and a second overlapping region, wherein the first overlapping region is the region of the first target image corresponding to the first target overlapping region, and the second overlapping region is the region of the second target image corresponding to the second target overlapping region;
matching the first overlapping region within the third target image, and obtaining a first target matching position when the difference between each pixel of the first overlapping region and the corresponding pixel of the third target image is smaller than a preset threshold; likewise, matching the second overlapping region within the third target image, and obtaining a second target matching position when the difference between each pixel of the second overlapping region and the corresponding pixel of the third target image is smaller than the preset threshold;
and performing translation stitching on the first target image according to the first target matching position and on the second target image according to the second target matching position, thereby generating the first target stitched image.
Optionally, predicting the moving track of each target object based on the preset prediction model includes:
acquiring the position coordinates of each target object from the preset target detection model;
and inputting the position coordinates of each target object into the preset prediction model to predict its moving track.
According to a second aspect of embodiments of the present application, there is provided a vehicle blind area detection apparatus, including:
an acquisition module, configured to acquire a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
a stitching module, configured to stitch the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle;
and a display module, configured to display the first target stitched image at a preset position in the target vehicle.
According to a third aspect of embodiments of the present application, there is provided an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the vehicle blind area detection method as described in the first aspect.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of a mobile terminal, cause the mobile terminal to perform the vehicle blind area detection method according to the first aspect of the present application.
According to a fifth aspect of embodiments of the present application, there is provided a vehicle comprising an electronic device according to the third aspect of the present application.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
according to the method and the device, image splicing processing is carried out on the first target image, the second target image and the third target image which are acquired according to different acquisition equipment of the target vehicle, the generated first target spliced image is used for displaying a blind area of the sight of a driver of the target vehicle, and the first target spliced image is displayed at a position where the sight of the driver can be observed in the target vehicle, so that the part which is shielded by the vehicle frame and the sight of the driver can be completely spliced on the basis that the camera is not added to the target vehicle, the whole visual angle of the driver can be restored, and the method and the device can restore the blind area of the driver and display the blind area in the target vehicle on the premise that any hardware is not added and original equipment is changed, so that the blind area of the vehicle is displayed, the blind area of the vehicle is intuitively displayed for the driver, the safety performance of the vehicle is improved, and the cost of the vehicle is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a first flowchart of a vehicle blind area detection method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of an application scenario of a vehicle blind area detection method according to an exemplary embodiment;
FIG. 3 is a first schematic diagram of image stitching in the vehicle blind area detection method according to an exemplary embodiment;
FIG. 4 is a second schematic diagram of image stitching in the vehicle blind area detection method according to an exemplary embodiment;
FIG. 5 is a third schematic diagram of image stitching in the vehicle blind area detection method according to an exemplary embodiment;
FIG. 6 is a fourth schematic diagram of image stitching in the vehicle blind area detection method according to an exemplary embodiment;
FIG. 7 is a second flowchart of a vehicle blind area detection method according to an exemplary embodiment;
FIG. 8 is a flowchart of step 102 of the method shown in FIG. 1 according to an exemplary embodiment;
FIG. 9 is a flowchart of step 105 of the method shown in FIG. 1 according to an exemplary embodiment;
FIG. 10 is a third flowchart of a vehicle blind area detection method according to an exemplary embodiment;
FIG. 11 is a block diagram of a vehicle blind area detection apparatus according to an exemplary embodiment;
FIG. 12 is a block diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the appended claims.
It should be noted that in the embodiments of the present application the vehicle frame creates a visual blind area to a certain extent, especially when the vehicle speed is low. As shown in fig. 2, a schematic diagram of an application scenario of the vehicle blind area detection method according to an exemplary embodiment, the hollow circle inside the vehicle represents the driver's position, L represents the left A-pillar and R represents the right A-pillar; both A-pillars (the pillars between the windshield and the left and right front doors) affect the driver's line of sight. Because the A-pillar shields the driver's view, a vehicle that starts suddenly when turning left or right as a red light changes to green may collide with a pedestrian who has not yet finished crossing the zebra crossing. Although the A-pillar occupies only a small part of the overall vehicle frame, it can completely hide an adult from a driver looking out of the vehicle.
A first embodiment of the present application relates to a vehicle blind area detection method. Referring to fig. 1, a flowchart of the vehicle blind area detection method provided by an embodiment of the present application, the method includes the following steps:
Step 101, acquiring a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
it should be noted that, in general, cameras are installed on a vehicle, for example, the vehicle may distribute front cameras, left cameras and right cameras, the front camera captures images of the front side of the vehicle in the driving direction, the left and right cameras are respectively distributed on two sides of the vehicle, the left camera captures images of the left side of the vehicle in the driving direction, the right camera captures images of the right side of the vehicle in the driving direction, and because the positions of the three cameras are fixed, three viewing angles have fixed repeated parts, so that in order to be convenient for a person skilled in the art to understand, in the embodiment of the present application, the first target image corresponding to the target vehicle may be an image captured by the left camera of the vehicle, the second target image may be an image captured by the right camera of the vehicle, and the third target image may be an image captured by the front camera of the vehicle.
The first target overlapping region where the first target image and the third target image overlap, and the second target overlapping region where the second target image and the third target image overlap, may each be obtained by feature comparison, for example with the scale-invariant feature transform (SIFT). In the embodiments of the present application, since the cameras acquiring the three images are horizontally adjacent, image stitching can also be implemented with a template-matching method, which essentially searches one image for the part that best matches a template taken from another image. A region of interest (ROI) may be selected in the first target image, as shown in figs. 3, 4 and 5: fig. 3 shows the first target image, fig. 4 the third target image, fig. 5 the second target image, and the parts framed by black and white lines are the acquired overlapping ROIs. In image processing, an ROI is a region selected from an image on which the analysis is focused; delimiting the target region with an ROI for subsequent processing reduces processing time and increases accuracy. Therefore, besides the target overlapping region obtained by SIFT feature matching, the target overlapping region can be further delimited by the ROI and used as the template for the image stitching that follows.
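As an illustration of this step, the following is a minimal sketch assuming OpenCV; the file names and the assumed ROI strip (the right 30% of the left-camera image) are illustrative choices, not values from the patent:

```python
import cv2

# Confirm the overlap between the left-camera image (first target image)
# and the front-camera image (third target image) with SIFT matching.
left_img = cv2.imread("left.jpg")    # first target image (assumed file)
front_img = cv2.imread("front.jpg")  # third target image (assumed file)

# The cameras are fixed, so the overlap sits in a roughly known place,
# e.g. the right-hand strip of the left-camera image; select it as the ROI.
h, w = left_img.shape[:2]
roi = left_img[:, int(0.7 * w):]     # right 30% of the left image (assumed)

gray_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
gray_front = cv2.cvtColor(front_img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(gray_roi, None)
kp2, des2 = sift.detectAndCompute(gray_front, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} feature correspondences support the candidate overlap")
```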
Step 102, stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle.
After the first target overlapping region and the second target overlapping region are determined, the first overlapping region and the second overlapping region may be used as templates for template matching in the third target image: the position with the highest matching degree is the best matching position. The row and column pixel distances by which each target image needs to be translated are then computed from the best matching position, the first target image and the second target image are translated by those distances, and image stitching is performed to generate the first target stitched image. Fig. 6 shows the first target stitched image obtained after stitching the first, second and third target images.
Specifically, as shown in fig. 8, a flowchart of step 102 of the method shown in fig. 1 according to an exemplary embodiment, the stitching includes the following steps:
Step 1021, acquiring a first overlapping region and a second overlapping region, wherein the first overlapping region is the region of the first target image corresponding to the first target overlapping region, and the second overlapping region is the region of the second target image corresponding to the second target overlapping region.
Step 1022, matching the first overlapping region within the third target image, and obtaining a first target matching position when the difference between each pixel of the first overlapping region and the corresponding pixel of the third target image is smaller than a preset threshold; likewise, matching the second overlapping region within the third target image, and obtaining a second target matching position when the difference between each pixel of the second overlapping region and the corresponding pixel of the third target image is smaller than the preset threshold.
Step 1023, performing translation stitching on the first target image according to the first target matching position and on the second target image according to the second target matching position, thereby generating the first target stitched image.
It should be noted that stitching the first target image with the third target image and stitching the second target image with the third target image use the same method; for ease of understanding, the stitching of the first and third target images is described here in detail as an example.
After the first target overlapping region has been calibrated through the ROI, any region may be selected from it as the template, that is, the first overlapping region in the first target image is acquired. The first overlapping region is necessarily smaller than or equal to the first target overlapping region (a template image can be no larger than the source image in either its number of rows or its number of columns). The first overlapping region is then used as the template for matching in the third target image. The template is slid over the source image one pixel at a time (from left to right), and at each position a metric is computed that expresses how well the template matches the image there (the higher the similarity of the compared areas, the better the stitching). For every position at which the template covers the source image, the computed metric is stored in a result image R, so each position (x, y) in R contains a matching metric. Depending on the matching method used, either the maximum or the minimum value of R marks the best match, and the best matching position is located by finding that extreme value in the result image. In summary, the pixel value at a position (x, y) of the result matrix R represents the degree to which the template matches the source image at (x, y); that is, matching is performed in the third target image according to the first overlapping region, and when the difference between each pixel of the first overlapping region and the corresponding pixel of the third target image is smaller than a preset threshold, the first target matching position is obtained. The differences may be stored in the result matrix R, and the best matching position, i.e. the first target matching position, is selected against the set threshold. The first target image is then translated according to the first target matching position, and the first target image and the third target image are stitched.
Similarly, the stitching of the second target image with the third target image follows the same method and is not described again here.
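The following is a minimal sketch of steps 1021 to 1023 for the left/front pair, assuming OpenCV; the ROI origin (rx, ry), the choice of TM_SQDIFF_NORMED, the threshold value and the canvas layout are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

left_img = cv2.imread("left.jpg")
front_img = cv2.imread("front.jpg")
rx, ry = int(0.7 * left_img.shape[1]), 0  # assumed ROI origin in left image
roi = left_img[ry:, rx:]                  # first overlapping region (template)

R = cv2.matchTemplate(front_img, roi, cv2.TM_SQDIFF_NORMED)
min_val, _, min_loc, _ = cv2.minMaxLoc(R)
mx, my = min_loc  # with TM_SQDIFF_NORMED the minimum of R is the best match

THRESH = 0.05  # assumed preset threshold on the normalized pixel difference
if min_val < THRESH:
    lh, lw = left_img.shape[:2]
    fh, fw = front_img.shape[:2]
    ox, oy = lw, lh                       # front image offset on the canvas
    canvas = np.zeros((fh + 2 * lh, fw + 2 * lw, 3), np.uint8)
    canvas[oy:oy + fh, ox:ox + fw] = front_img
    # Translate the left image so its ROI lands exactly on the matched spot.
    lx, ly = ox + mx - rx, oy + my - ry
    canvas[ly:ly + lh, lx:lx + lw] = left_img
    cv2.imwrite("stitched_left_front.jpg", canvas)
```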
Step 103, displaying the first target stitched image at a preset position in the target vehicle.
It should be noted that the first target stitched image generated in step 102 is used to display the blind area in the driver's line of sight, and the preset position in the target vehicle is a position the driver can see, for example the vehicle's instrument panel or its multimedia screen. In this way, the parts of the scene blocked from the driver's line of sight by the vehicle frame are stitched back together, restoring the driver's full viewing angle without adding a camera.
In the present application, image stitching is performed on the first target image, the second target image and the third target image acquired by different acquisition devices of the target vehicle. The generated first target stitched image displays the blind area in the driver's line of sight and is shown at a position inside the target vehicle where the driver can see it. In this way, the parts of the scene occluded from the driver by the vehicle frame are stitched back together without adding any camera to the target vehicle, restoring the driver's full viewing angle. Because no hardware is added and no original equipment is changed, the blind area is restored and displayed intuitively for the driver, which improves the safety performance of the vehicle and reduces its cost.
A second embodiment of the present application relates to a vehicle blind area detection method. Referring to fig. 7, a flowchart of the vehicle blind area detection method provided by an embodiment of the present application, the method includes the following steps:
step 101, acquiring a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
step 102, stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle;
and step 103, displaying the first target stitched image at a preset position in the target vehicle.
Steps 101-103 are discussed with reference to the foregoing, and are not repeated here.
Step 104, identifying all target objects in the first target stitched image based on a preset target detection model.
A first target stitched image was obtained in step 102; target detection then needs to be performed on it. It should be noted that in this embodiment the preset target detection model may be a model built on the YOLOv3-SPP neural network, and the YOLOv3-SPP network may be used to detect, i.e. identify, target objects in the first target stitched image, such as signal lamps, pedestrians, vehicles, animals and other object classes. The present application places no specific limitation on the preset target detection model.
While the preset target detection model performs identification, it also outputs the coordinate position and the confidence score of each target object in the target stitched image; this data is not itself displayed on the first target stitched image. The score of each target object is used to filter out detections for which the network performs poorly, the coordinate position allows a simple judgment of each object's situation relative to the vehicle, and the positions provide the basis for the subsequent tracking and prediction of the target objects.
It should be noted that the image data collected by the vehicle cameras takes the form of video, and video consists of successive image frames; target detection therefore essentially processes image frames, and dynamic target detection or tracking is realized by detecting consecutive frames or frames at intervals.
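As an illustration of this detection step, the following is a minimal sketch assuming the YOLOv3-SPP network is available as Darknet files loaded through OpenCV's DNN module; the file names, input size and thresholds are assumptions, not values from the patent:

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-spp.cfg", "yolov3-spp.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thresh=0.5, nms_thresh=0.4):
    """Return [(class_id, score, (x, y, w, h)), ...] for one stitched frame."""
    H, W = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (608, 608),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores, class_ids = [], [], []
    for out in net.forward(out_names):
        for det in out:  # det = [cx, cy, w, h, objectness, class scores...]
            cls = det[5:]
            cid = int(np.argmax(cls))
            score = float(det[4] * cls[cid])
            if score > conf_thresh:  # drop low-scoring detections
                cx, cy, w, h = det[:4] * np.array([W, H, W, H])
                boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
                scores.append(score)
                class_ids.append(cid)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(class_ids[i], scores[i], tuple(boxes[i]))
            for i in np.array(keep).flatten()]
```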
Step 105, predicting the moving track of each target object based on a preset prediction model.
In step 104, target objects were detected in the target stitched image according to the preset target detection model and each target object was identified; each target object is now tracked based on the preset prediction model, and its moving track is predicted.
Specifically, the moving track of each target object is predicted as shown in fig. 9, a flowchart of step 105 of the method shown in fig. 1 according to an exemplary embodiment, which includes the following steps:
In step 1051, the position coordinates of each target object are obtained according to the preset target detection model.
In step 1052, the position coordinates of each target object are input into a preset prediction model to predict the movement track of each target object.
It should be noted that in the embodiments of the present application the preset prediction model may track each detected target with a Kalman filter and flag possible dangers from the tracked target's moving track.
Kalman filtering is an algorithm that makes the estimated data approach the actual data continuously: data close to the actual value is obtained by fusing the state variables with the observed variables, the result is used as the state variable of the next step and fused with the next step's observations, and by repeating this process the final estimate comes very close to the actual data.
Therefore, in the embodiments of the present application, the position coordinates of each target object obtained from the preset target detection model are input into the preset prediction model. The system state is optimally estimated from the inputs and the observed outputs of the system, and the data acquired by the vehicle cameras can be updated and processed in real time by the Kalman filter. Specifically, using the position coordinates of the target objects together with data such as a preset speed or the speed obtained from an actual sensor as the current state, the next state can be predicted from the current state and the related data, and the moving track of each target object is thereby predicted.
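The following is a minimal per-object tracking sketch assuming OpenCV's cv2.KalmanFilter with a constant-velocity model; the state layout, time step and noise covariances are assumptions for illustration, not values from the patent:

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state [x, y, vx, vy], measurement [x, y]
dt = 1.0                     # assumed time step of one frame
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed

def track_step(detected_xy):
    """Fuse one detected position with the filter state, then predict the
    next position, i.e. one point on the object's predicted moving track."""
    kf.correct(np.array(detected_xy, np.float32).reshape(2, 1))
    pred = kf.predict()
    return float(pred[0, 0]), float(pred[1, 0])
```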
Step 106, determining the target state corresponding to each target object according to its moving track.
It should be noted that whether a target object is approaching the target vehicle is judged from the predicted moving track of each target object obtained in step 105 and its positional relationship with the target vehicle.
Further, determining the target state corresponding to each target object according to its moving track includes: judging from the moving track of each target object whether it is approaching the target vehicle; if so, marking the target object as being in a dangerous state; if not, marking it as being in a safe state.
In other words, the targets filtered by the Kalman filter are divided into two classes according to their moving tracks: if a target object is approaching the target vehicle it is marked as dangerous and may be rendered in red, and if it is not approaching the target vehicle it is marked as safe and may be rendered in green; other colors may equally be used, and the present application places no specific limitation on this.
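A minimal sketch of this classification rule, assuming the ego vehicle is represented by a fixed point in stitched-image coordinates (an assumption for illustration; the patent does not specify the test):

```python
import numpy as np

EGO_XY = np.array([480.0, 700.0])  # assumed ego-vehicle point in the image

def target_state(track_points):
    """track_points: predicted (x, y) positions, oldest first. The object
    is 'dangerous' when its distance to the ego vehicle is shrinking."""
    d = [float(np.linalg.norm(np.asarray(p, float) - EGO_XY))
         for p in track_points]
    approaching = len(d) >= 2 and d[-1] < d[0]
    return "dangerous" if approaching else "safe"
```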
Further, after the step of determining the target state corresponding to each target object according to its moving track, the method further includes: sending an early-warning prompt to the target vehicle when the target state corresponding to a target object is detected to be the dangerous state.
After the target state of a target object has been determined from its moving track, besides rendering the target stitched image and displaying it in the vehicle to assist the driver, an early-warning prompt may be sent to the target vehicle, for example a voice broadcast such as "pedestrian on the right side of the vehicle", or an indicator lamp corresponding to the dangerous situation may be lit; the present application places no specific limitation on this.
Step 107, rendering the first target stitched image according to the target state corresponding to each target object to generate a second target stitched image.
It should be noted that after the target state corresponding to each target object has been determined in step 106, the target stitched image may be rendered according to the target state; the specific rendering may change the color of the target object or highlight it, and the present application places no limitation on this.
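A minimal rendering sketch, assuming OpenCV drawing calls and the red/green convention described above; the box format is an assumption:

```python
import cv2

def render(stitched, objects):
    """objects: list of ((x, y, w, h), state) pairs, state 'dangerous' or
    'safe'. Returns the second target stitched image with colored boxes."""
    out = stitched.copy()
    for (x, y, w, h), state in objects:
        bgr = (0, 0, 255) if state == "dangerous" else (0, 255, 0)  # red/green
        cv2.rectangle(out, (x, y), (x + w, y + h), bgr, 2)
    return out
```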
Step 108, displaying the second target stitched image on the instrument panel of the target vehicle.
It should be noted that in this embodiment the target objects, in their different colors, appear on the second target stitched image in real time and are displayed on the instrument panel of the target vehicle, for example with the rendered image shown on the instrument panel as a background image that gives the driver an auxiliary reference. The targets filtered by the Kalman filter are divided into two classes, dangerous targets rendered in red and safe targets in green, so dangerous targets can be brought to the driver's attention more intuitively. Compared with an instrument panel split into two separately displayed areas, this raises the utilization of the vehicle's instrument panel and avoids the situation where a driver looking at one area cannot see the image shown in the other.
Further, the application may display on the instrument panel only when a dangerous target object is predicted, that is, the image is projected onto the instrument panel only when a target object in the dangerous state is present in the second target stitched image. This reduces the demand on the driver's attention and avoids the visual fatigue, and the resulting risk to vehicle occupants, that prolonged display on the instrument panel would cause.
In the present application, image stitching is performed on the first and second target images obtained from different acquisition devices of the target vehicle, so the parts of the scene occluded from the driver by the vehicle frame can be stitched back together and the driver's full viewing angle restored without adding a camera. All target objects in the restored image are identified by processing the target stitched image with the preset target detection model, the identified objects are tracked with the prediction model to predict each object's moving track, and the target stitched image is rendered and displayed according to the relation between each object's moving track and the target vehicle. Through the rendered image the driver can directly observe both the blind area hidden by the vehicle frame and the targets that pose a danger. The application therefore completes the reminder of hidden dangers in the vehicle's blind area without adding any hardware or changing the original equipment, which improves vehicle safety and reduces vehicle cost; and by combining detection with tracking prediction, it gives the driver an intuitive warning of potentially dangerous targets, protecting driving safety to a greater degree.
A third embodiment of the present application relates to a vehicle blind area detection method. Referring to fig. 10, a flowchart of the vehicle blind area detection method provided by an embodiment of the present application, the method includes the following steps:
Step 001, acquiring a first target image and a second target image, respectively, when a signal lamp is detected in a third target image of the target vehicle.
It should be noted that, as repeated tests and field data show, the influence of the vehicle frame on the driver generally occurs when the vehicle speed is low, in particular at intersections or zebra crossings where pedestrians, other vehicles and other target objects are numerous; at such times the slow speed means that the vehicle frame, for example the A-pillar, blocks the driver's line of sight for a long time. Therefore, to reduce the computing load borne by the in-vehicle terminal and the server, the present application may be configured to perform the subsequent image stitching, target detection and tracking prediction only when a signal lamp appears ahead of the moving vehicle. Accordingly, when a signal lamp is detected in the third target image of the target vehicle, the first target image and the second target image are acquired respectively. Consistent with step 101, the first target image may be the image acquired by the vehicle's left camera, the second target image the image acquired by the right camera, and the third target image the image acquired by the front camera.
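A minimal sketch of this gating, reusing the hypothetical `detect` helper from the detection sketch above and assuming COCO class index 9 for "traffic light" (both assumptions for illustration):

```python
TRAFFIC_LIGHT_ID = 9  # assumed COCO class index for "traffic light"

def maybe_run_pipeline(front_frame, grab_left, grab_right, stitch):
    """Run stitching only when a signal lamp is seen in the front image."""
    dets = detect(front_frame)  # hypothetical helper from the sketch above
    if any(cid == TRAFFIC_LIGHT_ID for cid, _, _ in dets):
        left, right = grab_left(), grab_right()  # acquire the side images
        return stitch(left, front_frame, right)  # stitch for display
    return None  # no signal lamp: skip the pipeline to save compute
```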
Step 101, acquiring a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between the first target image and the third target image corresponding to the target vehicle, and the second target overlapping region is the overlapping region between the second target image and the third target image corresponding to the target vehicle.
Step 102, stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle.
Step 103, displaying the first target stitched image at a preset position in the target vehicle.
Steps 101-103 are discussed with reference to the foregoing, and are not repeated here.
In the embodiments of the present application, the other cameras of the vehicle acquire pictures only when a signal lamp is detected ahead of the moving vehicle, so that the scenes hidden from the driver by the vehicle frame then undergo the subsequent image stitching and restoration. On the basis of guaranteeing a restored view of the vehicle's blind area, this further reduces the computing pressure on the in-vehicle terminal and the server and saves computing resources.
It should be noted that the embodiments described here serve only to help those skilled in the art understand the technical solution of this application, and do not limit it to the structures described.
A fourth embodiment of the present application relates to a vehicle blind area detection apparatus, as shown in fig. 11, fig. 11 is a block diagram of an apparatus for vehicle blind area detection according to an exemplary embodiment, the apparatus including:
an acquisition module 1101, configured to acquire a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
a stitching module 1102, configured to stitch the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle;
and a display module 1103, configured to display the first target stitched image at a preset position in the target vehicle.
The specific manner in which each module performs its operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
A fifth embodiment of the present application relates to an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement any one of the vehicle blind area detection methods described herein.
A sixth embodiment of the present application relates to a vehicle including the electronic device in the present application.
Fig. 12 is a block diagram illustrating an electronic device 1400, according to an example embodiment. Referring to fig. 12, an electronic device 1400 may include one or more of the following components: processing component 1402, memory 1404, power component 1406, multimedia component 1408, audio component 1410, input/output interface 1412, sensor component 1414, and communication component 1416.
The processing component 1402 generally controls overall operation of the device 1400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1402 may include one or more processors 1420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1402 can include one or more modules that facilitate interaction between the processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support operations at the device 1400. Examples of such data include instructions for any application or method operating on the device 1400, contact data, phonebook data, messages, pictures, videos, and the like. The memory 1404 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 1406 provides power to the various components of the electronic device 1400. Power components 1406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 1400.
The multimedia component 1408 includes a screen providing an output interface between the electronic device 1400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The audio component 1410 is configured to output and/or input audio signals. For example, the audio component 1410 includes a microphone (MIC) configured to receive external audio signals when the electronic device 1400 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, the audio component 1410 also includes a speaker for outputting audio signals.
The input/output interface 1412 provides an interface between the processing component 1402 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 1414 includes one or more sensors for providing status assessment of various aspects of the electronic device 1400. For example, the sensor assembly 1414 may detect an on/off state of the electronic device 1400 and the relative positioning of components such as the display and keypad of the electronic device 1400; it may also detect a change in position of the electronic device 1400 or of one of its components, the presence or absence of user contact with the electronic device 1400, the orientation or acceleration/deceleration of the electronic device 1400, and a change in its temperature. The sensor assembly 1414 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 1414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1414 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate communication between the electronic device 1400 and other devices, either wired or wireless. The electronic device 1400 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 1416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1416 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 1400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as a memory 1404 including instructions executable by the processor 1420 of the electronic device 1400 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The invention is not limited to the precise construction described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.

Claims (11)

1. A vehicle blind area detection method, characterized in that the method comprises:
acquiring a first target overlapping region and a second target overlapping region, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a blind area in the line of sight of the driver of the target vehicle;
and displaying the first target stitched image at a preset position in the target vehicle.
2. The method of claim 1, wherein after the step of displaying the first target stitched image at the preset position in the target vehicle, the method further comprises:
identifying all target objects in the first target stitched image based on a preset target detection model;
predicting the moving track of each target object based on a preset prediction model;
determining a target state corresponding to each target object according to its moving track;
rendering the first target stitched image according to the target state corresponding to each target object to generate a second target stitched image;
and displaying the second target stitched image on an instrument panel of the target vehicle.
3. The method of claim 1, wherein before the step of acquiring the first target overlapping region and the second target overlapping region, the method further comprises:
acquiring the first target image and the second target image, respectively, when a signal lamp is detected in the third target image of the target vehicle.
4. The method of claim 2, wherein determining the target state corresponding to each target object according to its moving track comprises:
judging, from the moving track of each target object, whether the target object is approaching the target vehicle;
if so, marking the target object as being in a dangerous state;
and if not, marking the target object as being in a safe state.
5. The method according to claim 2, wherein after the step of determining the target state corresponding to each target object according to its moving track, the method further comprises:
sending an early-warning prompt to the target vehicle when the target state corresponding to a target object is detected to be the dangerous state.
6. The method of claim 1, wherein stitching the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate the first target stitched image comprises:
acquiring a first overlapping region and a second overlapping region, respectively, wherein the first overlapping region is the region in the first target image corresponding to the first target overlapping region, and the second overlapping region is the region in the second target image corresponding to the second target overlapping region;
matching the first overlapping region within the third target image, and obtaining a first target matching position when the difference between the pixels of the first overlapping region and the pixels at the corresponding positions in the third target image is smaller than a preset threshold; matching the second overlapping region within the third target image, and obtaining a second target matching position when the difference between the pixels of the second overlapping region and the pixels at the corresponding positions in the third target image is smaller than the preset threshold;
and translating and stitching the first target image according to the first target matching position, and translating and stitching the second target image according to the second target matching position, so as to generate the first target stitched image.
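For illustration only, and not as part of the claims: a rough sketch of the claim 6 matching and translation stitching, assuming NumPy, grayscale frames of equal height, and a horizontal-only search. The threshold value, the mean-absolute-difference score and the search strategy are assumptions of the sketch:

import numpy as np

def match_position(overlap, third, threshold=1.0):
    """Slide the overlap strip across the third image and return the
    first column offset where the mean absolute pixel difference falls
    below the preset threshold (None if nothing matches)."""
    h, w = overlap.shape
    for x in range(third.shape[1] - w + 1):
        diff = np.abs(third[:h, x:x + w].astype(np.int16) - overlap.astype(np.int16))
        if diff.mean() < threshold:
            return x
    return None

def translate_and_stitch(first, second, third, x1, x2, w_ov):
    """Translate the side images so their overlap strips land on the
    matched columns of the third image, then paste all three onto a
    shared canvas."""
    off_first = x1 - (first.shape[1] - w_ov)  # left edge of first image
    off_second = x2                           # left edge of second image
    left = min(off_first, 0)
    width = max(off_second + second.shape[1], third.shape[1]) - left
    canvas = np.zeros((third.shape[0], width), third.dtype)
    canvas[:, off_first - left:off_first - left + first.shape[1]] = first
    canvas[:, -left:-left + third.shape[1]] = third
    canvas[:, off_second - left:off_second - left + second.shape[1]] = second
    return canvas

# Tiny synthetic grayscale frames whose overlap strips genuinely match.
w_ov = 4
third = np.tile(np.arange(16, dtype=np.uint8), (8, 1))
first, second = third[:, :8], third[:, 8:]
x1 = match_position(first[:, -w_ov:], third)  # -> 4
x2 = match_position(second[:, :w_ov], third)  # -> 8
print(translate_and_stitch(first, second, third, x1, x2, w_ov).shape)  # (8, 16)

Casting to int16 before subtracting avoids unsigned wrap-around; a production system would more likely use a normalized correlation over the full 2D search space.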
7. The method of claim 2, wherein predicting the movement trajectory of each target object based on a preset prediction model comprises:
acquiring the position coordinates of each target object according to the preset target detection model;
and inputting the position coordinates of each target object into the preset prediction model to predict the movement trajectory of each target object.
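For illustration only, and not as part of the claims: claim 7's two steps can be sketched with a linear constant-velocity predictor standing in for the preset prediction model (a deployed system might instead use a Kalman filter or a learned sequence model); the function name and trajectory format are assumptions:

import numpy as np

def predict_trajectory(coords, steps=3):
    """Fit x(t) and y(t) with first-degree polynomials over the
    observed position coordinates and extrapolate the next steps."""
    coords = np.asarray(coords, dtype=float)
    t = np.arange(len(coords))
    fx = np.polyfit(t, coords[:, 0], 1)
    fy = np.polyfit(t, coords[:, 1], 1)
    future = np.arange(len(coords), len(coords) + steps)
    return np.stack([np.polyval(fx, future), np.polyval(fy, future)], axis=1)

print(predict_trajectory([(0, 0), (1, 2), (2, 4)]))
# [[ 3.  6.]
#  [ 4.  8.]
#  [ 5. 10.]]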
8. A vehicle blind area detection apparatus, characterized by comprising:
an acquisition module, configured to acquire a first target overlapping region and a second target overlapping region, respectively, wherein the first target overlapping region is the overlapping region between a first target image and a third target image corresponding to a target vehicle, and the second target overlapping region is the overlapping region between a second target image and the third target image corresponding to the target vehicle;
a stitching module, configured to stitch the first target image, the second target image and the third target image according to the first target overlapping region and the second target overlapping region to generate a first target stitched image, wherein the first target stitched image is used for displaying a line-of-sight blind area of a driver of the target vehicle;
and a display module, configured to display the first target stitched image at a preset position within the target vehicle.
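For illustration only, and not as part of the claims: the claim 8 apparatus mirrors the method of claim 1 as three cooperating modules. A minimal object-oriented sketch, with hypothetical class and callable names:

class VehicleBlindAreaDetector:
    """Composes the acquisition, stitching and display modules of the
    claimed apparatus; the callables injected here are placeholders."""

    def __init__(self, acquire, stitch, display):
        self.acquire = acquire  # -> (overlapping regions, camera frames)
        self.stitch = stitch    # (frames, regions) -> stitched image
        self.display = display  # stitched image -> in-vehicle screen

    def run_once(self):
        regions, frames = self.acquire()
        self.display(self.stitch(frames, regions))

detector = VehicleBlindAreaDetector(
    acquire=lambda: ((None, None), (None, None, None)),
    stitch=lambda frames, regions: "stitched image",
    display=print,
)
detector.run_once()  # prints the placeholder stitched image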
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the vehicle blind area detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of a mobile terminal, cause the mobile terminal to perform the vehicle blind area detection method according to any one of claims 1 to 7.
11. A vehicle comprising the electronic device of claim 9.
CN202210856826.2A 2022-07-20 2022-07-20 Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle Pending CN117495754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210856826.2A CN117495754A (en) 2022-07-20 2022-07-20 Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN117495754A 2024-02-02

Family

ID=89674938

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210856826.2A Pending CN117495754A (en) 2022-07-20 2022-07-20 Vehicle blind area detection method and device, electronic equipment, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN117495754A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination