CN117373248A - Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform - Google Patents


Info

Publication number
CN117373248A
CN117373248A (application CN202311445646.6A)
Authority
CN
China
Prior art keywords
vehicle
road condition
mounted road
blind area
warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311445646.6A
Other languages
Chinese (zh)
Other versions
CN117373248B (en)
Inventor
胡坤 (Hu Kun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huixin Video Electronics Co ltd
Original Assignee
Shenzhen Huixin Video Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huixin Video Electronics Co ltd filed Critical Shenzhen Huixin Video Electronics Co ltd
Priority to CN202311445646.6A
Publication of CN117373248A
Application granted
Publication of CN117373248B
Legal status: Active (current)
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0108: Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G 1/0125: Traffic data processing
    • G08G 1/0137: Measuring and analyzing of parameters relative to traffic conditions for specific applications

Landscapes

  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

According to the image-recognition-based intelligent early warning method, system, and cloud platform for automobile blind areas, a vehicle-mounted road condition object space conversion matrix can be determined from the vehicle-mounted road condition object clusters. Based on that matrix, the object spatial position features between any two target vehicle-mounted road condition objects, as well as between the clusters themselves, can be analyzed, so that complete and accurate object spatial position features among all target vehicle-mounted road condition objects are obtained. The blind area information of the target vehicle during driving can therefore be determined accurately from these features, improving the accuracy and reliability of the early warning processing for the blind area information.

Description

Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform
Technical Field
The invention relates to the technical field of image processing, and in particular to an image-recognition-based intelligent early warning method, system, and cloud platform for automobile blind areas.
Background
An automobile blind area is a region that a driver seated normally in the driver's seat cannot observe directly because the line of sight is blocked by the vehicle body. Automobile blind areas mainly comprise four large visual blind areas (the front blind area, rear blind area, rear-view mirror blind area, and A/B-pillar blind area) as well as some artificial blind areas. Traffic accidents caused by automobile blind areas are too numerous to count and pose a great threat to people's lives and property, so accurately implementing early warning processing for automobile blind areas is a key problem at present.
Disclosure of Invention
To address the above technical problems in the related art, the invention provides an image-recognition-based intelligent early warning method, system, and cloud platform for automobile blind areas.
In a first aspect, an embodiment of the present invention provides an image-recognition-based intelligent early warning method for automobile blind areas, applied to an automobile blind area intelligent early warning cloud platform, the method comprising: receiving a to-be-analyzed vehicle-mounted scene image stream uploaded by a target vehicle during driving, and determining a plurality of target vehicle-mounted road condition objects in the image stream; determining at least one vehicle-mounted road condition object cluster from the plurality of target vehicle-mounted road condition objects, wherein each cluster comprises at least two target vehicle-mounted road condition objects; determining a vehicle-mounted road condition object space conversion matrix from the at least one cluster, the matrix reflecting the relative spatial positions between clusters that comprise the same target vehicle-mounted road condition objects; analyzing, through the matrix, the object spatial position features between the target vehicle-mounted road condition objects in the image stream; and determining blind area information of the target vehicle during driving based on the object spatial position features.
In this way, after the to-be-analyzed vehicle-mounted scene image stream is obtained, a plurality of target vehicle-mounted road condition objects can be determined in it, and at least one vehicle-mounted road condition object cluster is determined from those objects. A vehicle-mounted road condition object space conversion matrix can then be determined from the at least one cluster, and the object spatial position features between the target objects in the image stream can be analyzed through the matrix: the features between any two target objects and the features between the clusters can both be analyzed, so complete and accurate object spatial position features among all target objects are obtained. The blind area information of the target vehicle during driving can therefore be determined accurately from these features, improving the accuracy and reliability of the early warning processing for the blind area information.
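The first-aspect steps can be sketched as a minimal pipeline. Everything below (the `RoadObject` class, the Manhattan-distance clustering threshold, the rectangular blind zone) is a hypothetical illustration under simplified assumptions, not the patent's actual detection or clustering method:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class RoadObject:
    """A detected vehicle-mounted road condition object (hypothetical model)."""
    obj_id: int
    x: float  # position in the vehicle frame, metres
    y: float

def build_clusters(objects, max_gap=5.0):
    """Group objects into clusters of at least two spatially close objects."""
    clusters = []
    for a, b in combinations(objects, 2):
        if abs(a.x - b.x) + abs(a.y - b.y) <= max_gap:  # Manhattan distance
            clusters.append((a, b))
    return clusters

def blind_area_alerts(objects, blind_zone=((-2.0, 0.0), (-3.0, 3.0))):
    """Flag objects falling inside an assumed rectangular blind zone."""
    (x0, x1), (y0, y1) = blind_zone
    return [o.obj_id for o in objects if x0 <= o.x <= x1 and y0 <= o.y <= y1]

objects = [RoadObject(1, -1.0, 1.0), RoadObject(2, 4.0, 0.5), RoadObject(3, -1.5, -2.0)]
clusters = build_clusters(objects)   # objects 1 and 3 are close enough to cluster
alerts = blind_area_alerts(objects)  # objects 1 and 3 sit inside the blind zone
```

In this sketch, objects 1 and 3 form the only cluster and both trigger blind-area alerts; the spatial conversion matrix step is elaborated separately below.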
For some possible designs, determining the vehicle-mounted road condition object space conversion matrix from the at least one vehicle-mounted road condition object cluster comprises: when the at least one cluster comprises a plurality of clusters, determining road condition object two-tuple (pair) units from the plurality of clusters to obtain a plurality of such units, each cluster corresponding to one pair unit; generating the relative spatial position of the object units between any two pair units to obtain a basic object spatial position feature distribution; and acquiring a spatial collision prediction variable between the pair units, and removing from the basic distribution the relative spatial positions of object units that do not conform to the variable, obtaining the vehicle-mounted road condition object space conversion matrix.
Thus, when the at least one vehicle-mounted road condition object cluster comprises a plurality of clusters, road condition object pair units are determined from them, and the basic object spatial position feature distribution is generated from the relative spatial positions between any two pair units, so the obtained distribution has a high signal-to-noise ratio, which improves spatial positioning accuracy during the object spatial position feature analysis. In addition, acquiring the spatial collision prediction variable between the pair units and removing the relative spatial positions that do not conform to it yields the vehicle-mounted road condition object space conversion matrix, which improves the timeliness of analyzing the object spatial position features between the target objects based on the matrix and ensures spatial positioning efficiency.
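The pair-unit construction and collision-prediction filtering described above can be sketched as follows. The centroid representation of a pair unit and the Euclidean collision radius are assumptions for illustration; the patent does not specify the concrete form of the spatial collision prediction variable:

```python
from itertools import combinations

def cluster_centroid(cluster):
    """Centroid of a pair unit given as a list of (x, y) points."""
    xs, ys = zip(*cluster)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def build_spatial_matrix(clusters, collision_radius=10.0):
    """Relative positions between every pair of pair units, keeping only those
    whose separation is within the assumed collision-prediction radius."""
    matrix = {}
    for (i, a), (j, b) in combinations(enumerate(clusters), 2):
        ca, cb = cluster_centroid(a), cluster_centroid(b)
        dx, dy = cb[0] - ca[0], cb[1] - ca[1]
        if (dx * dx + dy * dy) ** 0.5 <= collision_radius:  # collision-relevant only
            matrix[(i, j)] = (dx, dy)
    return matrix

clusters = [[(0, 0), (1, 0)], [(3, 4), (3, 6)], [(40, 40), (41, 41)]]
m = build_spatial_matrix(clusters)  # only the two nearby units survive filtering
```

The far-away third unit is cleaned out, which is the "signal-to-noise" effect the passage describes: the resulting matrix only carries collision-relevant relative positions.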
For some possible designs, analyzing, through the vehicle-mounted road condition object space conversion matrix, the object spatial position features between the target vehicle-mounted road condition objects in the to-be-analyzed vehicle-mounted scene image stream comprises: acquiring an object spatial position feature analysis network; and loading the matrix into the network for analysis processing to obtain the object spatial position features between the target objects in the image stream, wherein the features comprise first object spatial position features between the target objects included in each cluster and/or second object spatial position features between the clusters.
Thus, the object spatial position feature analysis network can be acquired first, and loading the vehicle-mounted road condition object space conversion matrix into it for analysis yields the object spatial position features between the target objects in the image stream, realizing intelligent analysis of those features. In addition, both the features between the target objects within each cluster and the features between the clusters can be obtained, so the object spatial position features in the image stream are obtained completely and accurately, improving the reliability of the image-recognition-based intelligent early warning method for automobile blind areas.
For some possible designs, the method further comprises: acquiring a vehicle-mounted scene image stream sample cluster, which comprises a plurality of vehicle-mounted scene image stream samples and a network debugging prior basis; each sample comprises a to-be-debugged object spatial position feature distribution, and the prior basis of each sample comprises spatial position keywords corresponding to the object spatial position features of the vehicle-mounted road condition objects included in each to-be-debugged cluster in that distribution and/or spatial position keywords of the object spatial position features between the clusters; and debugging a to-be-debugged object spatial position feature analysis network with the sample cluster to obtain the object spatial position feature analysis network.
For some possible designs, debugging the to-be-debugged object spatial position feature analysis network with the vehicle-mounted scene image stream sample cluster comprises: loading the sample cluster into the to-be-debugged network for debugging to obtain a basic debugging report for each sample, the report reflecting regression analysis data of the object spatial position features of the vehicle-mounted road condition objects in each to-be-debugged cluster and/or between the clusters; determining a target debugging report among the basic debugging reports, the target debugging report being a report that differs from its corresponding network debugging prior basis; optimizing the bias coefficient of the cross-entropy training cost through the target debugging report, and determining the training cost index of the cross-entropy training cost based on the optimized bias coefficient; and optimizing the network variables of the to-be-debugged network through the training cost index until a network meeting the debugging requirements is obtained, which is determined as the object spatial position feature analysis network.
Thus, the to-be-debugged object spatial position feature analysis network can be debugged with the sample cluster, and the bias coefficient of the cross-entropy training cost can be optimized through the target debugging report, so the network pays more attention to samples with analysis errors. This reduces the regression analysis deviation of the network and improves the analysis accuracy of the resulting object spatial position feature analysis network.
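The bias-coefficient idea above resembles up-weighting the cross-entropy of samples whose prediction disagrees with the prior label. The sketch below is one possible reading under that assumption; the patent does not define the exact form of the bias coefficient or training cost index:

```python
import math

def cross_entropy(p, label):
    """Binary cross entropy for predicted probability p and label in {0, 1}."""
    eps = 1e-9  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def weighted_training_cost(preds, labels, bias=2.0):
    """Mean cross entropy where samples whose prediction disagrees with the
    prior label (the 'target debugging report') get an extra bias weight."""
    total, weight_sum = 0.0, 0.0
    for p, y in zip(preds, labels):
        w = bias if round(p) != y else 1.0  # up-weight analysis errors
        total += w * cross_entropy(p, y)
        weight_sum += w
    return total / weight_sum

# The third sample (p=0.4, label=1) is an analysis error and dominates more
# as the bias coefficient grows.
cost_plain = weighted_training_cost([0.9, 0.2, 0.4], [1, 0, 1], bias=1.0)
cost_biased = weighted_training_cost([0.9, 0.2, 0.4], [1, 0, 1], bias=3.0)
```

Raising the bias makes the erroneous sample contribute more to the cost, which is what "paying more attention to samples with analysis errors" amounts to in this reading.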
For some possible designs, determining at least one vehicle-mounted road condition object cluster from the plurality of target vehicle-mounted road condition objects comprises: adding a corresponding vehicle-mounted road condition object category label to each of the plurality of target objects, obtaining a plurality of category labels; and determining the at least one cluster from the plurality of category labels, wherein each cluster comprises the category labels of the target objects corresponding to that cluster.
For some possible designs, adding a corresponding vehicle-mounted road condition object category label to each of the plurality of target vehicle-mounted road condition objects comprises: acquiring an object outline size image of each target object; and performing image feature extraction on the outline size image to obtain image feature extraction information, and determining the category label corresponding to the target object through that information.
Thus, after the object outline size image of each target object is obtained, image feature extraction is performed on it and the category label of each target object is determined from the extracted information, so the plurality of target objects in the to-be-analyzed image stream are converted into a form of category labels that the object spatial position feature analysis network can analyze, allowing the network to better analyze the object spatial position features between the target objects. Determining at least one cluster from the plurality of category labels then improves the timeliness with which the network analyzes those features.
For some possible designs, determining the vehicle-mounted road condition object category label corresponding to the target object through the image feature extraction information comprises: when there are a plurality of pieces of image feature extraction information, determining a weighting result of them to obtain target image feature extraction information; and performing a feature operation on the target information and the number of pieces of extraction information to obtain the category label of the target object.
Thus, when there are a plurality of pieces of image feature extraction information for a target object, they can be weighted-averaged to obtain a unique vehicle-mounted road condition object category label for that object, which reduces the error of the image-recognition-based intelligent early warning method for automobile blind areas and improves the accuracy of spatial positioning, and hence of the method as a whole.
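The weighted-average fusion of several feature extractions into one label can be sketched as below. The prototype-based nearest-neighbour classification step is a hypothetical stand-in for the unspecified "feature operation"; the category names and feature dimensions are invented for illustration:

```python
def fuse_features(feature_vectors, weights=None):
    """Weighted average of several feature vectors extracted for one object;
    defaults to a plain mean when no weights are given."""
    n = len(feature_vectors)
    weights = weights or [1.0 / n] * n
    dim = len(feature_vectors[0])
    return [sum(w * v[d] for w, v in zip(weights, feature_vectors)) for d in range(dim)]

def classify(feature, prototypes):
    """Assign the category label whose prototype is nearest to the fused feature."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda lbl: dist(feature, prototypes[lbl]))

prototypes = {"pedestrian": [1.0, 0.0], "vehicle": [0.0, 1.0]}  # hypothetical
fused = fuse_features([[0.9, 0.2], [0.7, 0.1]])  # two extractions, one object
label = classify(fused, prototypes)
```

Fusing before classifying gives each object exactly one label, matching the "unique category label per object" point in the passage.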
For some possible designs, determining a plurality of target vehicle-mounted road condition objects in the to-be-analyzed vehicle-mounted scene image stream comprises: determining the detection frame size information of a video image detection frame, the detection frame being used to capture and detect objects in the image stream; determining, at the current time node, a target vehicle-mounted scene image inside the detection frame in the image stream, and analyzing a vehicle-mounted road condition object capture prompt in that image; and analyzing, through the capture prompt, the object outline size image corresponding to the vehicle-mounted road condition object in the target image to obtain at least one basic vehicle-mounted road condition object, and determining the target vehicle-mounted road condition objects from the at least one basic object.
Thus, the video image detection frame traverses the to-be-analyzed image stream to obtain the plurality of target objects; when the image stream is long, the detection frame decomposes it so that the target objects can still be analyzed, improving the efficiency of determining the plurality of target objects in the stream and the timeliness of the object spatial position feature analysis. In addition, the capture prompt allows the vehicle-mounted road condition objects in the stream to be captured accurately, improving the accuracy of determining the target objects and hence of the feature analysis.
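Traversing the stream with a fixed-size detection frame is essentially a sliding-window pass over the frame sequence. The sketch below assumes a non-overlapping window and string-labelled detections purely for illustration:

```python
def sliding_windows(stream_length, window_size, stride):
    """Frame index ranges covered by the video image detection frame as it
    traverses the image stream."""
    return [(s, min(s + window_size, stream_length))
            for s in range(0, stream_length, stride)]

def detect_in_window(frames, window):
    """Hypothetical capture step: collect object hits inside one window."""
    start, end = window
    return [obj for frame in frames[start:end] for obj in frame]

# Each inner list stands for the detections in one frame of the stream.
frames = [["car"], [], ["bike", "car"], [], ["pedestrian"], []]
windows = sliding_windows(len(frames), window_size=3, stride=3)
hits = [detect_in_window(frames, w) for w in windows]
```

Decomposing a long stream into windows lets each chunk be analyzed independently (and in parallel), which is the efficiency gain the passage attributes to the detection frame.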
For some possible designs, determining the target vehicle-mounted road condition objects from the at least one basic vehicle-mounted road condition object comprises: when there are a plurality of basic vehicle-mounted road condition objects, performing a vehicle-mounted road condition object checking process on them, and determining the target objects from the checked basic objects.
Thus, when there are a plurality of basic vehicle-mounted road condition objects, they can be checked and corrected, which reduces misleading inputs during the object spatial position feature analysis and improves its accuracy.
For some possible designs, the method further comprises: acquiring a blind area collision early-warning event set for the blind area information, the set comprising at least two blind area collision early-warning events; acquiring the linkage coefficient between each event in the set and the blind area information; arranging the events according to the linkage coefficient corresponding to each event and the trend prediction vector of each event to obtain a corresponding blind area collision early-warning event sequence; and constructing a target early-warning prompt sequence related to the blind area information based on the event sequence, the prompt sequence comprising at least two target early-warning prompt levels.
Arranging the events according to their linkage coefficients and trend prediction vectors to obtain the corresponding event sequence comprises: decomposing the events according to their linkage coefficients and trend prediction vectors to obtain at least two blind area collision early-warning event subsets; and arranging the subsets, and arranging the events within each subset, to obtain the event sequence.
Decomposing the events according to their linkage coefficients and trend prediction vectors to obtain the at least two subsets comprises: weighting the trend prediction vector of each event by its corresponding linkage coefficient to obtain a collision trend prediction vector for each event; and grouping the events according to their collision trend prediction vectors to obtain the at least two subsets.
Arranging the subsets and arranging the events within each subset to obtain the event sequence specifically comprises: arranging the subsets according to the number of events each contains; for each subset, arranging its events according to the correlation weights between the trend prediction vectors of the events in the subset and the subset itself; and generating the event sequence based on the arrangement result among the subsets and the arrangement results of the events within each subset.
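The event weighting, grouping, and two-level ordering described above can be sketched as follows. The scalar reduction of a trend vector, the score threshold used for grouping, and the event names are all hypothetical simplifications; the patent does not specify these details:

```python
def collision_score(trend_vector, linkage):
    """Weight a trend prediction vector by its linkage coefficient and reduce
    it to a scalar collision score (simplifying assumption)."""
    return linkage * sum(trend_vector)

def order_events(events):
    """events: list of (name, trend_vector, linkage). Group events into high-
    and low-score subsets, order subsets by size, then order events inside
    each subset by score, descending."""
    scored = [(name, collision_score(tv, lk)) for name, tv, lk in events]
    high = [e for e in scored if e[1] >= 1.0]   # threshold is an assumption
    low = [e for e in scored if e[1] < 1.0]
    subsets = sorted([high, low], key=len, reverse=True)  # larger subset first
    ordered = []
    for subset in subsets:
        ordered.extend(sorted(subset, key=lambda e: e[1], reverse=True))
    return [name for name, _ in ordered]

events = [
    ("rear_approach", [0.4, 0.3], 2.0),  # score 1.4
    ("lane_merge", [0.2, 0.1], 1.0),     # score 0.3
    ("cut_in", [0.5, 0.6], 2.0),         # score 2.2
]
sequence = order_events(events)
```

The resulting sequence places the most collision-relevant events first, which is what the target early-warning prompt sequence is then built from.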
In a second aspect, the invention further provides an image-recognition-based intelligent early warning system for automobile blind areas, comprising an automobile blind area intelligent early warning cloud platform and a target vehicle that communicate with each other, the cloud platform being configured to: receive a to-be-analyzed vehicle-mounted scene image stream uploaded by the target vehicle during driving, and determine a plurality of target vehicle-mounted road condition objects in the image stream; determine at least one vehicle-mounted road condition object cluster from the plurality of target objects, each cluster comprising at least two target objects; determine a vehicle-mounted road condition object space conversion matrix from the at least one cluster, the matrix reflecting the relative spatial positions between clusters that comprise the same target objects; analyze, through the matrix, the object spatial position features between the target objects in the image stream; and determine blind area information of the target vehicle during driving based on those features.
In a third aspect, the invention further provides an automobile blind area intelligent early warning cloud platform, comprising a processor and a memory; the processor communicates with the memory and is configured to read and execute a computer program from the memory to implement the method described above.
In a fourth aspect, the present invention also provides a computer readable storage medium having stored thereon a program which when executed by a processor implements the method described above.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flow chart of an image-recognition-based intelligent early warning method for the automobile blind area according to an embodiment of the invention.
Fig. 2 is a schematic diagram of a communication architecture of an image-recognition-based intelligent early warning system for the automobile blind area according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiment of the invention can be executed on an automobile blind area intelligent early warning cloud platform, computer equipment, or a similar computing device. Taking execution on the automobile blind area intelligent early warning cloud platform as an example, the cloud platform 10 may include one or more processors 102 (the processors 102 may include, but are not limited to, a microprocessor MCU, a programmable logic device FPGA, or the like) and a memory 104 for storing data; optionally, the cloud platform may further include a transmission device 106 for communication functions. Those skilled in the art will appreciate that this structure is merely illustrative and does not limit the structure of the automobile blind area intelligent early warning cloud platform. For example, the cloud platform 10 may also include more or fewer components than shown above, or have a different configuration from that shown above.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the image-recognition-based intelligent early warning method for the automobile blind area in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the automobile blind area intelligent early warning cloud platform 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means 106 is arranged to receive or transmit data via a network. The specific example of the network may include a wireless network provided by a communication provider of the car blind area intelligent warning cloud platform 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Based on this, referring to fig. 1, fig. 1 is a schematic flow chart of an image-recognition-based intelligent early warning method for the automobile blind area provided by an embodiment of the invention. The method is applied to an automobile blind area intelligent early warning cloud platform and may include the following technical scheme.
S1, receiving an image stream of a vehicle-mounted scene to be analyzed, which is uploaded by a target vehicle in a running process, and determining a plurality of target vehicle-mounted road condition objects in the image stream of the vehicle-mounted scene to be analyzed.
In the embodiment of the invention, the target vehicle and the automobile blind area intelligent early warning cloud platform may communicate via the Internet of Vehicles, and the vehicle-mounted scene image stream to be analyzed may be uploaded to the cloud platform in real time while the target vehicle travels on a road. The vehicle-mounted scene image stream to be analyzed may be a video stream, and a target vehicle-mounted road condition object may be any object in the image stream that is near, around, or at a certain distance from the target vehicle, including passers-by, pets, vehicles, buildings, landmark equipment, and the like.
S2, determining at least one vehicle-mounted road condition object cluster through the plurality of target vehicle-mounted road condition objects.
Each vehicle-mounted road condition object cluster comprises at least two target vehicle-mounted road condition objects.
S3, determining a vehicle-mounted road condition object space conversion matrix through the at least one vehicle-mounted road condition object cluster.
In the embodiment of the invention, the vehicle-mounted road condition object space conversion matrix is used for reflecting the relative space positions among vehicle-mounted road condition object clusters including the same target vehicle-mounted road condition objects in each vehicle-mounted road condition object cluster. The relative spatial position can be understood as a four-dimensional spatial position, i.e. three dimensions corresponding to a three-dimensional position based on the world coordinate system, plus a time dimension during the travel of the vehicle. In addition, the vehicle-mounted road condition object space conversion matrix can be recorded in a topological relation network mode.
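To make the topological recording concrete, the sketch below is one possible data layout (an illustrative assumption, not part of the claimed embodiment; all names are hypothetical): each relative spatial position is stored as an edge of a topological relation network carrying a three-dimensional world-coordinate offset plus the time dimension.

```python
# Hypothetical sketch: record the space conversion matrix as a topological
# relation network whose edges carry (dx, dy, dz, t) -- a 3-D world-coordinate
# offset plus the time dimension described in the embodiment.
class SpatialConversionNetwork:
    def __init__(self):
        self.edges = {}  # (cluster_a, cluster_b) -> list of (dx, dy, dz, t)

    def add_relative_position(self, cluster_a, cluster_b, dx, dy, dz, t):
        key = tuple(sorted((cluster_a, cluster_b)))
        self.edges.setdefault(key, []).append((dx, dy, dz, t))

    def latest_offset(self, cluster_a, cluster_b):
        # Most recent entry along the time dimension
        key = tuple(sorted((cluster_a, cluster_b)))
        return max(self.edges[key], key=lambda e: e[3])

net = SpatialConversionNetwork()
net.add_relative_position("pedestrian_cluster", "vehicle_cluster", 2.0, 0.5, 0.0, t=0.0)
net.add_relative_position("pedestrian_cluster", "vehicle_cluster", 1.5, 0.4, 0.0, t=0.1)
```

Querying `latest_offset` then yields the most recent four-dimensional relative position between two clusters.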
S4, analyzing object space position features among the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed through the vehicle-mounted road condition object space conversion matrix.
In the embodiment of the invention, the object space position features reflect the real-time spatial position relationships between different target vehicle-mounted road condition objects, and between those objects and the target vehicle. These relationships change over time, and the blind area information of the target vehicle during driving can be determined through them.
S5, determining blind area information of the target vehicle in the running process based on the object space position features.
For example, the blind area information of the target vehicle during driving can be analyzed in real time from the object space position features among pedestrians, buildings, and the target vehicle. For another example, the blind area of the target vehicle's camera or sensor can be analyzed in real time from the object space position features between other vehicles and the target vehicle.
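A minimal sketch of one way such a determination could work, assuming a simple rectangular blind-zone model in the target vehicle's own coordinate frame; the zone bounds and function name are illustrative assumptions, not taken from the embodiment.

```python
# Hypothetical sketch: flag an object as inside a rear-quarter blind area when
# its relative position (metres, in the target vehicle's frame: x forward,
# y to the left) falls inside a rectangular zone.  Bounds are illustrative.
def in_blind_area(rel_x, rel_y,
                  x_range=(-5.0, -0.5),   # behind the driver, up to 5 m back
                  y_range=(1.0, 3.5)):    # adjacent lane on the left side
    return x_range[0] <= rel_x <= x_range[1] and y_range[0] <= rel_y <= y_range[1]
```

A real system would derive the relative positions from the object space position features rather than receive them directly.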
It can be understood that, when S1-S5 are applied, after the vehicle-mounted scene image stream to be analyzed is obtained, a plurality of target vehicle-mounted road condition objects can be determined in it, and at least one vehicle-mounted road condition object cluster can be determined from those objects. Further, a vehicle-mounted road condition object space conversion matrix can be determined from the at least one vehicle-mounted road condition object cluster, and the object space position features among the target vehicle-mounted road condition objects in the image stream can be analyzed based on that matrix. In this way, the matrix determined from the clusters allows the object space position features between any two target vehicle-mounted road condition objects, as well as between clusters, to be analyzed, so that complete and accurate object space position features among the target vehicle-mounted road condition objects are obtained. The blind area information of the target vehicle during driving can then be accurately determined from these features, which improves the accuracy and reliability of the blind area early warning processing.
For some possible design ideas, the determining the vehicle-mounted road condition object space conversion matrix through the at least one vehicle-mounted road condition object cluster described in S3 may include the technical solutions described in S31-S33.
S31, on the basis that the at least one vehicle-mounted road condition object cluster is a plurality of clusters, determining a road condition object two-tuple unit for each of the plurality of vehicle-mounted road condition object clusters to obtain a plurality of road condition object two-tuple units, wherein each vehicle-mounted road condition object cluster corresponds to one road condition object two-tuple unit.
The road condition object two-tuple unit can be a pair of road condition object members simplified based on node connection rules.
S32, generating the relative spatial position of the object units between any two road condition object two-tuple units to obtain a basic object space position feature distribution.
The relative spatial position of the object units can be reflected through a connecting line between nodes, and the basic object space position feature distribution can be understood as an initial object space position feature distribution.
S33, acquiring a spatial collision prediction variable between each pair of road condition object two-tuple units, and cleaning out of the basic object space position feature distribution the relative spatial positions of object units that do not conform to the spatial collision prediction variable, so as to obtain the vehicle-mounted road condition object space conversion matrix.
In the embodiment of the invention, the spatial collision prediction variable can be determined based on the node positions and node connections of the road condition object two-tuple units, and it constrains the positional relationship between the two-tuple units so that collision trends can be analyzed; the spatial collision prediction variable can therefore be further understood as a constraint condition.
When S31-S33 are applied, road condition object two-tuple units can be determined based on the plurality of vehicle-mounted road condition object clusters, and the basic object space position feature distribution can be obtained by generating the relative spatial positions of the object units between any two two-tuple units, so that the obtained distribution has a higher signal-to-noise ratio and the spatial positioning precision of the object space position feature analysis can be improved. In addition, by acquiring the spatial collision prediction variable between each pair of two-tuple units and cleaning out the relative spatial positions that do not conform to it, the vehicle-mounted road condition object space conversion matrix is obtained; this improves the timeliness of analyzing object space position features among the target vehicle-mounted road condition objects based on the matrix and ensures the efficiency of spatial positioning.
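The filtering idea of S31-S33 can be sketched as follows, treating the spatial collision prediction variable as a minimum-separation constraint between two-tuple units; the threshold, the 2-D positions, and all names are assumptions for illustration only.

```python
# Hypothetical sketch of S31-S33: generate relative positions between every
# pair of two-tuple units, then clean out pairs whose separation violates the
# constraint ("spatial collision prediction variable").  Threshold is assumed.
import itertools
import math

def build_space_conversion_matrix(units, min_separation=1.0):
    kept = []
    for (name_a, pos_a), (name_b, pos_b) in itertools.combinations(units, 2):
        dist = math.dist(pos_a, pos_b)
        if dist >= min_separation:   # constraint satisfied: keep this edge
            kept.append((name_a, name_b, dist))
    return kept

units = [("u1", (0.0, 0.0)), ("u2", (0.3, 0.4)), ("u3", (3.0, 4.0))]
matrix = build_space_conversion_matrix(units)  # u1-u2 pair is cleaned out
```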
For some possible design ideas, the analyzing the object space position features between the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed through the vehicle-mounted road condition object space conversion matrix described in S4 may include the technical scheme described in S40.
S40, acquiring an object space position feature analysis network; and loading the vehicle-mounted road condition object space conversion matrix into the object space position feature analysis network for analysis, so as to obtain the object space position features among the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed.
Wherein the object space position features comprise: first object space position features among the target vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster, and/or second object space position features among the vehicle-mounted road condition object clusters, from which the object space position features among the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed are obtained.
In the embodiment of the present invention, the object space position feature analysis network may be a convolutional neural network model or a deep learning network, which is not limited herein. The object space position feature analysis network is acquired first, and the vehicle-mounted road condition object space conversion matrix is loaded into it for analysis to obtain the object space position features among the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed, so that the analysis of object space position features in the image stream is performed intelligently. In addition, both the object space position features among the target vehicle-mounted road condition objects within each cluster and those among the clusters can be obtained by the analysis, so that the object space position features in the image stream are obtained completely and accurately, which improves the reliability of the image-recognition-based intelligent early warning method for the automobile blind area.
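As a stand-in for the analysis network (the embodiment permits a convolutional neural network or a deep learning network; the fixed-weight linear map below is only an illustrative placeholder), one can show how the conversion matrix's per-pair offsets could be mapped to scalar features.

```python
# Placeholder for the object space position feature analysis network: each row
# of the conversion matrix is a (dx, dy, dz) relative offset; the "network"
# here is a fixed linear layer producing one feature per pair.  Weights are
# illustrative assumptions, not learned parameters from the embodiment.
def analyse_spatial_features(conversion_matrix, weights=(0.5, 0.25, 0.25)):
    return [sum(w * v for w, v in zip(weights, row)) for row in conversion_matrix]

features = analyse_spatial_features([(2.0, 0.0, 0.0), (0.0, 4.0, 0.0)])
```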
For some possible design ideas, the method further comprises a debugging step for the object space position feature analysis network, and the debugging step can comprise the technical schemes described in the following steps 100 and 200.
And 100, acquiring a vehicle-mounted scene image stream sample cluster.
The vehicle-mounted scene image stream sample cluster comprises a plurality of vehicle-mounted scene image stream samples and their network debugging prior bases (sample label information). Each vehicle-mounted scene image stream sample comprises a to-be-debugged object space position feature distribution, and the network debugging prior basis of each sample comprises the spatial position keywords (spatial position labels/marks) corresponding to the object space position features of the vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster in that distribution, and/or the spatial position keywords corresponding to the object space position features between the vehicle-mounted road condition object clusters.
And 200, debugging the spatial position feature analysis network of the object to be debugged through the vehicle-mounted scene image stream sample cluster to obtain the spatial position feature analysis network of the object.
Based on the above, the debugging of the object space position feature analysis network to be debugged through the vehicle-mounted scene image stream sample cluster described in step 200 to obtain the object space position feature analysis network may include the technical solutions described in steps 210-240.
Step 210, loading the vehicle-mounted scene image stream sample cluster into the spatial position feature analysis network of the object to be debugged for debugging, and obtaining a basic debugging report of each vehicle-mounted scene image stream sample.
The basic debugging report (initial training result) is used for reflecting regression analysis data (prediction information) of the object space position characteristics of the vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster in the to-be-debugged object space position characteristic distribution and/or regression analysis data of the object space position characteristics among the vehicle-mounted road condition object clusters.
Step 220, determining a target debug report in the basic debug report.
The target debugging report is a debugging report that differs from its corresponding network debugging prior basis.
Step 230, optimizing bias coefficients of the cross entropy training cost through the target debugging report, and determining a training cost index (loss value) of the cross entropy training cost (cross entropy loss function) based on the bias coefficients (confidence weights) after optimization.
Step 240, optimizing the network variables of the spatial position feature analysis network of the object to be debugged through the training cost index of the cross entropy training cost until the spatial position feature analysis network of the object to be debugged, which meets the debugging requirements, is obtained, and determining the spatial position feature analysis network of the object to be debugged, which meets the debugging requirements, as the spatial position feature analysis network of the object.
When steps 210-240 are applied, the to-be-debugged object space position feature analysis network can be debugged based on the vehicle-mounted scene image stream sample cluster, and the bias coefficients of the cross entropy training cost can be optimized based on the target debugging report, so that the network pays more attention to the vehicle-mounted scene image stream samples with analysis errors. In this way, the regression analysis deviation of the to-be-debugged network is reduced, and the analysis accuracy of the resulting object space position feature analysis network is improved.
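A compact sketch of the reweighting idea in steps 210-240, assuming the bias coefficients act as per-sample confidence weights that are boosted for samples appearing in the target debugging report; the boost factor and all names are illustrative assumptions.

```python
# Hypothetical sketch of steps 210-240: samples whose basic debugging report
# disagrees with the network debugging prior basis (the "target debugging
# report") get their bias coefficient boosted, so the weighted cross entropy
# training cost focuses on analysis errors.
import math

def weighted_cross_entropy(true_class_probs, weights):
    # Weighted mean of -w * log(p), normalised by the weight sum.
    total = sum(-w * math.log(p) for p, w in zip(true_class_probs, weights))
    return total / sum(weights)

def update_bias_coefficients(predictions, prior_labels, weights, boost=2.0):
    return [w * boost if p != y else w
            for p, y, w in zip(predictions, prior_labels, weights)]

weights = update_bias_coefficients([0, 1, 1], [0, 0, 1], [1.0, 1.0, 1.0])
loss = weighted_cross_entropy([1.0, 0.5, 1.0], weights)
```

The misanalysed second sample ends up contributing twice as much to the loss as either correct sample.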
For some possible design ideas, the determining at least one vehicle-mounted road condition object cluster by the plurality of target vehicle-mounted road condition objects described in S2 may include the technical solutions described in S21 and S22.
S21, adding a corresponding vehicle-mounted road condition object type label for each target vehicle-mounted road condition object in the plurality of target vehicle-mounted road condition objects to obtain a plurality of vehicle-mounted road condition object type labels.
S22, determining at least one vehicle-mounted road condition object cluster through the plurality of vehicle-mounted road condition object class labels, wherein each vehicle-mounted road condition object cluster comprises a vehicle-mounted road condition object class label of a target vehicle-mounted road condition object corresponding to the vehicle-mounted road condition object cluster.
By the design, different vehicle-mounted road condition object clusters can be accurately distinguished based on the vehicle-mounted road condition object class labels.
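A minimal illustration of S21-S22, assuming category labels are plain strings and that only label groups with at least two objects form clusters, consistent with the requirement above; object identifiers are hypothetical.

```python
# Illustrative sketch of S21-S22: group detected objects into clusters keyed
# by their vehicle-mounted road condition object category label, keeping only
# groups with at least two objects (each cluster must contain >= 2 objects).
from collections import defaultdict

def cluster_by_label(objects):
    groups = defaultdict(list)
    for obj_id, label in objects:
        groups[label].append(obj_id)
    return {label: ids for label, ids in groups.items() if len(ids) >= 2}

clusters = cluster_by_label([("o1", "pedestrian"), ("o2", "pedestrian"),
                             ("o3", "vehicle"), ("o4", "vehicle"),
                             ("o5", "building")])
```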
For some possible design ideas, the adding, for each of the plurality of target vehicle-mounted road condition objects, a corresponding vehicle-mounted road condition object category tag described in S21 may include the technical solutions described in S211 and S212.
S211, acquiring an object outline size image of each target vehicle-mounted road condition object in the plurality of target vehicle-mounted road condition objects.
S212, extracting image features of the object outline size image to obtain image feature extraction information, and determining a vehicle-mounted road condition object type label corresponding to the target vehicle-mounted road condition object through the image feature extraction information.
For example, the image feature extraction information may be an image encoding feature. When S211 and S212 are applied, after the object outline size image of each target vehicle-mounted road condition object is obtained, image feature extraction is performed on it to obtain the image feature extraction information, and the vehicle-mounted road condition object category label of each target vehicle-mounted road condition object is determined based on that information. In this way, the plurality of target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed are converted into vehicle-mounted road condition object category labels that the object space position feature analysis network can process, so the network can better analyze the object space position features among the target vehicle-mounted road condition objects. Moreover, determining at least one vehicle-mounted road condition object cluster from the plurality of category labels corresponding to the plurality of target vehicle-mounted road condition objects reduces the time the network spends analyzing the object space position features, further improving the timeliness of the analysis.
For some possible design ideas, the vehicle-mounted road condition object category label described in S212, which determines the corresponding target vehicle-mounted road condition object according to the image feature extraction information, may include the technical solutions described in S2121 and S2122.
S2121, determining weighted results of a plurality of image feature extraction information on the basis that the image feature extraction information is a plurality of image feature extraction information, and obtaining target image feature extraction information.
Wherein the weighted result may be the sum of the image feature vectors.
S2122, carrying out feature operation on the target image feature extraction information and the number of the image feature extraction information to obtain a vehicle-mounted road condition object type label of the target vehicle-mounted road condition object.
Further, the feature operation can be understood as averaging. When S2121 and S2122 are applied, on the basis that each target vehicle-mounted road condition object has several pieces of image feature extraction information, a weighted average of those pieces yields a unique vehicle-mounted road condition object category label for each target vehicle-mounted road condition object. This reduces the error of the image-recognition-based intelligent early warning method for the automobile blind area and improves the accuracy of spatial positioning, thereby improving the accuracy of the method.
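The weighting and feature operation of S2121-S2122 can be sketched as a plain sum over the feature vectors followed by division by their count, i.e. an average; the vector values are illustrative.

```python
# Illustrative sketch of S2121-S2122: sum several image feature extraction
# vectors (the "weighted result"), then divide by their count (the "feature
# operation", i.e. an average) to obtain one representative vector per object.
def average_feature(vectors):
    n = len(vectors)
    summed = [sum(dim) for dim in zip(*vectors)]   # S2121: weighted result
    return [s / n for s in summed]                 # S2122: feature operation

label_embedding = average_feature([[1.0, 2.0], [3.0, 4.0]])
```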
For some possible design ideas, determining a plurality of target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed, which is described in S1, may include the technical schemes described in S11-S13.
S11, determining detection frame size information of a video image detection frame; the video image detection frame is a frame for capturing and detecting objects in the vehicle-mounted scene image stream to be analyzed.
S12, determining a target vehicle-mounted scene image in the video image detection frame in the vehicle-mounted scene image stream to be analyzed at the current time node, and analyzing a vehicle-mounted road condition object capturing prompt in the target vehicle-mounted scene image.
S13, analyzing an object outline size image of the corresponding vehicle-mounted road condition object in the target vehicle-mounted scene image through the vehicle-mounted road condition object capturing prompt to obtain at least one basic vehicle-mounted road condition object, and determining the target vehicle-mounted road condition object through the at least one basic vehicle-mounted road condition object.
When S11-S13 are applied, the video image detection frame traverses the vehicle-mounted scene image stream to be analyzed to obtain the plurality of target vehicle-mounted road condition objects. On the basis that the image stream is long, the detection frame disassembles it so that the target vehicle-mounted road condition objects can be analyzed window by window, which improves the efficiency of determining the plurality of target vehicle-mounted road condition objects in the image stream and thus the timeliness of the object space position feature analysis. In addition, the vehicle-mounted road condition objects in the image stream can be accurately captured through the vehicle-mounted road condition object capturing prompt, which improves the accuracy of determining the target vehicle-mounted road condition objects and, in turn, the accuracy of the object space position feature analysis.
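One possible way to traverse a long stream with a fixed-size detection window, as described above; the window size and function name are assumptions for illustration.

```python
# Illustrative sketch of S11-S12: slide a fixed-size video image detection
# frame over the stream so long streams are disassembled into windows that
# can be analysed one at a time.
def detection_windows(frame_count, window_size):
    return [(start, min(start + window_size, frame_count))
            for start in range(0, frame_count, window_size)]

windows = detection_windows(10, 4)  # half-open (start, end) frame ranges
```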
For some possible design considerations, determining the target vehicle-mounted road condition object through the at least one basic vehicle-mounted road condition object described in S13 may include the following: on the basis that the at least one basic vehicle-mounted road condition object comprises a plurality of objects, performing vehicle-mounted road condition object checking processing (such as duplicate object de-duplication) on the plurality of basic vehicle-mounted road condition objects, and determining the target vehicle-mounted road condition object based on the checked basic vehicle-mounted road condition objects. By this design, the plurality of basic vehicle-mounted road condition objects can be checked, which avoids misguidance in the object space position feature analysis and improves its accuracy.
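The de-duplication mentioned above could, for instance, be realized with an intersection-over-union check between bounding boxes; the threshold and box coordinates are illustrative assumptions.

```python
# Hypothetical sketch of the checking/de-duplication in S13: drop a detection
# whose bounding box overlaps an already-kept box above an IoU threshold.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def deduplicate(boxes, threshold=0.5):
    kept = []
    for box in boxes:
        if all(iou(box, k) < threshold for k in kept):
            kept.append(box)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
unique = deduplicate(boxes)  # second box overlaps the first and is dropped
```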
For some possible design considerations, the method further comprises: acquiring a blind area collision early-warning event set aiming at the blind area information, wherein the blind area collision early-warning event set comprises at least two blind area collision early-warning events; obtaining linkage coefficients (correlation or influence weights) between each blind area collision early-warning event in the blind area collision early-warning event set and the blind area information; according to the linkage coefficient corresponding to each blind area collision early-warning event and the trend prediction vector of each blind area collision early-warning event, the blind area collision early-warning events are sorted (such as sequencing processing) to obtain a corresponding blind area collision early-warning event sequence; and constructing a target early warning prompt sequence related to the blind area information based on the blind area collision early warning event sequence, wherein the target early warning prompt sequence comprises at least two target early warning prompt levels.
By this design, an accurate and orderly ordering of target early warning prompt levels can be achieved based on the linkage coefficients and trend prediction vectors of the blind area collision early-warning events, providing a complete and accurate basis for subsequent early warning processing.
For some possible design ideas, the sorting the blind area collision early-warning events according to the linkage coefficient corresponding to the blind area collision early-warning event and the trend prediction vector of the blind area collision early-warning event to obtain a corresponding blind area collision early-warning event sequence includes: disassembling all the blind area collision early-warning events according to the linkage coefficient corresponding to all the blind area collision early-warning events and the trend prediction vector of all the blind area collision early-warning events to obtain at least two blind area collision early-warning event subsets; and sorting all the blind area collision early-warning event subsets, and sorting all the blind area collision early-warning events in all the blind area collision early-warning event subsets respectively to obtain the blind area collision early-warning event sequence.
For some possible design ideas, the disassembling the blind area collision early-warning event according to the linkage coefficient corresponding to the blind area collision early-warning event and the trend prediction vector of the blind area collision early-warning event to obtain at least two blind area collision early-warning event subsets includes: weighting trend prediction vectors of all the blind area collision early-warning events according to the linkage coefficients corresponding to all the blind area collision early-warning events respectively to obtain collision trend prediction vectors of all the blind area collision early-warning events; grouping the blind area collision early-warning events according to the collision trend prediction vector of the blind area collision early-warning events to obtain at least two blind area collision early-warning event subsets.
In some possible designs, ordering the subsets relative to one another and ordering the events within each subset to obtain the blind area collision early-warning event sequence specifically includes: ordering the subsets according to the number of blind area collision early-warning events each subset contains; for each subset, ordering its events according to the correlation weight between each event's trend prediction vector and the subset; and generating the blind area collision early-warning event sequence from the ordering among the subsets together with the ordering of the events within each subset.
In this way, the completeness of the blind area collision early-warning event sequence is ensured and omissions from the sequence are avoided.
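Purely as an illustration, the partition-and-order scheme described above can be sketched in Python. The patent does not fix concrete formulas, so the grouping criterion (magnitude of the linkage-weighted trend vector), the use of cosine similarity to a subset centroid as the "correlation weight", and all names here are assumptions:

```python
from dataclasses import dataclass

@dataclass
class WarningEvent:
    name: str
    linkage: float       # linkage coefficient with the blind area information
    trend: tuple         # trend prediction vector

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    return _dot(a, a) ** 0.5

def _cosine(a, b):
    na, nb = _norm(a), _norm(b)
    return _dot(a, b) / (na * nb) if na and nb else 0.0

def rank_events(events, n_groups=2):
    # 1. weight each trend prediction vector by its linkage coefficient
    collision = {e.name: tuple(e.linkage * x for x in e.trend) for e in events}
    # 2. group by the magnitude of the collision trend vector (assumed criterion)
    ordered = sorted(events, key=lambda e: _norm(collision[e.name]), reverse=True)
    size = max(1, len(ordered) // n_groups)
    subsets = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    # 3. order subsets by the number of events they contain
    subsets.sort(key=len, reverse=True)
    # 4. within each subset, order events by correlation with the subset centroid
    sequence = []
    for sub in subsets:
        dim = len(sub[0].trend)
        centroid = tuple(sum(e.trend[i] for e in sub) / len(sub) for i in range(dim))
        sequence.extend(sorted(sub, key=lambda e: _cosine(e.trend, centroid),
                               reverse=True))
    return sequence
```

The returned sequence concatenates the ordered subsets, so higher-priority events surface first; any real implementation would substitute the patent's actual weighting and correlation definitions.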
Based on the same or a similar inventive concept, and with reference to fig. 2, there is further provided a schematic architecture of an image-recognition-based intelligent early warning system 30 for an automobile blind area, which comprises an automobile blind area intelligent early warning cloud platform 10 and a target vehicle 20 in communication with each other, wherein, in operation, the automobile blind area intelligent early warning cloud platform 10 and the target vehicle 20 implement, in whole or in part, the technical solution described in the method embodiment.
Further, a computer-readable storage medium is also provided, on which a program is stored that, when executed by a processor, implements the method described above.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
If the functions are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. An image-recognition-based intelligent early warning method for an automobile blind area, characterized in that the method is applied to an automobile blind area intelligent early warning cloud platform and comprises the following steps:
receiving a vehicle-mounted scene image stream to be analyzed uploaded by a target vehicle during driving, and determining a plurality of target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed;
determining at least one vehicle-mounted road condition object cluster from the plurality of target vehicle-mounted road condition objects, wherein each vehicle-mounted road condition object cluster comprises at least two target vehicle-mounted road condition objects;
determining a vehicle-mounted road condition object space conversion matrix from the at least one vehicle-mounted road condition object cluster, wherein the vehicle-mounted road condition object space conversion matrix reflects the relative spatial positions between vehicle-mounted road condition object clusters that comprise the same target vehicle-mounted road condition object;
analyzing object spatial position features between the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed through the vehicle-mounted road condition object space conversion matrix;
and determining blind area information of the target vehicle during driving based on the object spatial position features.
2. The method of claim 1, wherein the determining a vehicle-mounted road condition object space conversion matrix from the at least one vehicle-mounted road condition object cluster comprises:
when the at least one vehicle-mounted road condition object cluster comprises a plurality of clusters, determining road condition object two-tuple units based on the plurality of vehicle-mounted road condition object clusters to obtain a plurality of road condition object two-tuple units, wherein each vehicle-mounted road condition object cluster corresponds to one road condition object two-tuple unit;
generating an object-unit relative spatial position between every two road condition object two-tuple units to obtain a basic object spatial position feature distribution; and
acquiring a spatial collision prediction variable between the road condition object two-tuple units, and filtering out of the basic object spatial position feature distribution those object-unit relative spatial positions that do not conform to the spatial collision prediction variable, to obtain the vehicle-mounted road condition object space conversion matrix.
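As an illustration of claim 2 only: the sketch below builds one two-tuple ("binary group") unit per cluster (here taken as the cluster's two nearest objects, an assumed interpretation), records the relative spatial position between every two units, and filters by a spatial collision predictor (here a plain distance threshold, also an assumption):

```python
from itertools import combinations

def build_space_conversion_matrix(clusters, collision_threshold=10.0):
    """Hedged sketch of claim 2. Objects are (object_id, x, y) tuples."""
    # one two-tuple (pair) unit per cluster: its two nearest objects' midpoint
    units = []
    for c in clusters:
        pair = min(combinations(c, 2),
                   key=lambda p: abs(p[0][1] - p[1][1]) + abs(p[0][2] - p[1][2]))
        cx = (pair[0][1] + pair[1][1]) / 2
        cy = (pair[0][2] + pair[1][2]) / 2
        units.append((cx, cy))
    # relative spatial position between every two units (basic distribution)
    distribution = {(i, j): (units[j][0] - units[i][0], units[j][1] - units[i][1])
                    for i, j in combinations(range(len(units)), 2)}
    # keep only relative positions consistent with the collision predictor
    return {k: v for k, v in distribution.items()
            if (v[0] ** 2 + v[1] ** 2) ** 0.5 <= collision_threshold}
```

The surviving dictionary plays the role of the "space conversion matrix": it retains only cluster pairs close enough to matter for collision prediction.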
3. The method of claim 1, wherein the analyzing, through the vehicle-mounted road condition object space conversion matrix, the object spatial position features between the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed comprises:
acquiring an object spatial position feature analysis network; and loading the vehicle-mounted road condition object space conversion matrix into the object spatial position feature analysis network for analysis, to obtain the object spatial position features between the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed, wherein the object spatial position features comprise at least one of: a first object spatial position feature between the target vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster; and a second object spatial position feature between the vehicle-mounted road condition object clusters.
4. The method of claim 3, further comprising: acquiring a vehicle-mounted scene image stream sample cluster, wherein the vehicle-mounted scene image stream sample cluster comprises a plurality of vehicle-mounted scene image stream samples and a network debugging prior basis; each vehicle-mounted scene image stream sample comprises an object spatial position feature distribution to be debugged, and the network debugging prior basis of each sample comprises spatial position keywords corresponding to the object spatial position features of the vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster to be debugged in the object spatial position feature distribution to be debugged, and/or spatial position keywords of the object spatial position features between the vehicle-mounted road condition object clusters; and debugging an object spatial position feature analysis network to be debugged through the vehicle-mounted scene image stream sample cluster to obtain the object spatial position feature analysis network;
wherein the debugging of the object spatial position feature analysis network to be debugged through the vehicle-mounted scene image stream sample cluster to obtain the object spatial position feature analysis network comprises: loading the vehicle-mounted scene image stream sample cluster into the object spatial position feature analysis network to be debugged for debugging, and obtaining a basic debugging report for each vehicle-mounted scene image stream sample, wherein each basic debugging report reflects regression analysis data of the object spatial position features of the vehicle-mounted road condition objects included in each vehicle-mounted road condition object cluster to be debugged in the object spatial position feature distribution to be debugged, and/or regression analysis data of the object spatial position features between the vehicle-mounted road condition object clusters; determining target debugging reports among the basic debugging reports, wherein a target debugging report is a debugging report that differs from its corresponding network debugging prior basis; optimizing a bias coefficient of a cross-entropy training cost through the target debugging reports, and determining a training cost index of the cross-entropy training cost based on the optimized bias coefficient; and optimizing network variables of the object spatial position feature analysis network to be debugged through the training cost index of the cross-entropy training cost until an object spatial position feature analysis network to be debugged that meets the debugging requirements is obtained, and determining that network as the object spatial position feature analysis network.
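The debugging (training) loop of claim 4 can be sketched with a toy one-variable model. Only samples whose prediction disagrees with the prior label (the "target debugging reports") drive a bias-adjusted cross-entropy update; the concrete bias-coefficient update rule, the model, and all names are assumptions, since the claim leaves them open:

```python
import math

def train_spatial_feature_net(samples, labels, epochs=200, lr=0.1):
    """Toy 1-D stand-in for the claimed 'debugging' loop."""
    w, b = 0.0, 0.0      # network variables (one weight for the sketch)
    bias_coef = 1.0      # bias coefficient of the cross-entropy training cost
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid prediction
            if (p >= 0.5) == bool(y):
                continue  # report matches the prior basis: not a target report
            # optimize the bias coefficient from the mismatched (target) report
            bias_coef = 0.9 * bias_coef + 0.1 * abs(p - y)
            grad = bias_coef * (p - y)   # bias-weighted cross-entropy gradient
            w -= lr * grad * x
            b -= lr * grad
    return w, b
```

The skip-on-match rule mirrors "determining a target debugging report ... different from the corresponding network debugging prior basis"; a production network would of course be a deep model trained on the full sample cluster.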
5. The method of claim 1, wherein the determining at least one vehicle-mounted road condition object cluster from the plurality of target vehicle-mounted road condition objects comprises: adding a corresponding vehicle-mounted road condition object category label to each of the plurality of target vehicle-mounted road condition objects to obtain a plurality of vehicle-mounted road condition object category labels; and determining the at least one vehicle-mounted road condition object cluster through the plurality of vehicle-mounted road condition object category labels, wherein each vehicle-mounted road condition object cluster comprises the vehicle-mounted road condition object category labels of the target vehicle-mounted road condition objects corresponding to that cluster;
wherein the adding a corresponding vehicle-mounted road condition object category label to each of the plurality of target vehicle-mounted road condition objects comprises: acquiring an object outline size image of each of the plurality of target vehicle-mounted road condition objects; and performing image feature extraction on the object outline size image to obtain image feature extraction information, and determining the vehicle-mounted road condition object category label corresponding to the target vehicle-mounted road condition object according to the image feature extraction information;
wherein the determining the vehicle-mounted road condition object category label corresponding to the target vehicle-mounted road condition object through the image feature extraction information comprises: on the basis of a plurality of pieces of image feature extraction information, determining a weighted result of the plurality of pieces of image feature extraction information to obtain target image feature extraction information; and performing a feature operation on the target image feature extraction information and the number of pieces of image feature extraction information to obtain the vehicle-mounted road condition object category label of the target vehicle-mounted road condition object.
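One way to read claim 5's label derivation: aggregate the image feature extraction results with weights, combine with the number of results, and map the aggregate to a discrete class. The weighted mean and the argmax mapping below are assumptions standing in for the unspecified "feature operation":

```python
def category_label(feature_vectors, weights=None):
    """Hedged sketch: weighted aggregate of feature extraction results,
    a 'feature operation' with the count n, then an assumed argmax label."""
    n = len(feature_vectors)
    if weights is None:
        weights = [1.0] * n
    dim = len(feature_vectors[0])
    # weighted result of all image feature extraction information
    target = [sum(w * v[i] for w, v in zip(weights, feature_vectors))
              for i in range(dim)]
    # feature operation with the number of pieces of information (here: mean)
    mean = [t / n for t in target]
    # map the aggregated feature to a discrete category label (assumed: argmax)
    return max(range(dim), key=lambda i: mean[i])
```

Objects sharing a label can then be grouped into the clusters that claim 5's first step describes.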
6. The method of claim 1, wherein the determining a plurality of target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed comprises: determining detection frame size information of a video image detection frame, wherein the video image detection frame is used to capture and detect objects in the vehicle-mounted scene image stream to be analyzed; determining, at the current time node, a target vehicle-mounted scene image within the video image detection frame in the vehicle-mounted scene image stream to be analyzed, and analyzing a vehicle-mounted road condition object capture prompt in the target vehicle-mounted scene image; and analyzing, through the vehicle-mounted road condition object capture prompt, the object outline size image corresponding to the vehicle-mounted road condition object in the target vehicle-mounted scene image to obtain at least one basic vehicle-mounted road condition object, and determining the target vehicle-mounted road condition object from the at least one basic vehicle-mounted road condition object;
wherein the determining the target vehicle-mounted road condition object from the at least one basic vehicle-mounted road condition object comprises: when the at least one basic vehicle-mounted road condition object comprises a plurality of basic vehicle-mounted road condition objects, performing vehicle-mounted road condition object verification processing on the plurality of basic vehicle-mounted road condition objects, and determining the target vehicle-mounted road condition object based on the verified basic vehicle-mounted road condition objects.
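The verification step at the end of claim 6 is left open; one plausible reading is de-duplication of raw detections. The sketch below drops any detection closer than a minimum separation to an already-kept object; the criterion, threshold, and names are assumptions:

```python
def verify_objects(basic_objects, min_separation=1.0):
    """Assumed interpretation of the 'verification processing': discard
    detections that duplicate an already-kept object. Objects are
    (object_id, x, y) tuples."""
    kept = []
    for obj in basic_objects:
        if all(abs(obj[1] - k[1]) + abs(obj[2] - k[2]) >= min_separation
               for k in kept):
            kept.append(obj)
    return kept
```

A real detector would more likely use bounding-box overlap (IoU) suppression, but the first-come-keep structure is the same.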
7. The method according to claim 1, wherein the method further comprises:
acquiring a blind area collision early-warning event set for the blind area information, wherein the blind area collision early-warning event set comprises at least two blind area collision early-warning events; acquiring a linkage coefficient between each blind area collision early-warning event in the set and the blind area information; sorting the blind area collision early-warning events according to the linkage coefficient corresponding to each event and the trend prediction vector of each event to obtain a corresponding blind area collision early-warning event sequence; and constructing a target early-warning prompt sequence related to the blind area information based on the blind area collision early-warning event sequence, wherein the target early-warning prompt sequence comprises at least two target early-warning prompt levels;
wherein the sorting of the blind area collision early-warning events according to the linkage coefficient and the trend prediction vector of each event to obtain a corresponding blind area collision early-warning event sequence comprises: partitioning the blind area collision early-warning events according to the linkage coefficient and the trend prediction vector of each event to obtain at least two blind area collision early-warning event subsets; and ordering the subsets relative to one another and ordering the events within each subset to obtain the blind area collision early-warning event sequence;
wherein the partitioning of the blind area collision early-warning events according to the linkage coefficient and the trend prediction vector of each event to obtain at least two blind area collision early-warning event subsets comprises: weighting the trend prediction vector of each blind area collision early-warning event by its corresponding linkage coefficient to obtain a collision trend prediction vector for each event; and grouping the events according to their collision trend prediction vectors to obtain the at least two blind area collision early-warning event subsets;
wherein ordering the subsets relative to one another and ordering the events within each subset to obtain the blind area collision early-warning event sequence specifically comprises: ordering the subsets according to the number of blind area collision early-warning events each subset contains; for each subset, ordering its events according to the correlation weight between each event's trend prediction vector and the subset; and generating the blind area collision early-warning event sequence from the ordering among the subsets together with the ordering of the events within each subset.
8. An image-recognition-based intelligent early warning system for an automobile blind area, characterized by comprising an automobile blind area intelligent early warning cloud platform and a target vehicle in communication with each other;
wherein the automobile blind area intelligent early warning cloud platform is configured to: receive a vehicle-mounted scene image stream to be analyzed uploaded by the target vehicle during driving, and determine a plurality of target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed; determine at least one vehicle-mounted road condition object cluster from the plurality of target vehicle-mounted road condition objects, wherein each vehicle-mounted road condition object cluster comprises at least two target vehicle-mounted road condition objects; determine a vehicle-mounted road condition object space conversion matrix from the at least one vehicle-mounted road condition object cluster, wherein the vehicle-mounted road condition object space conversion matrix reflects the relative spatial positions between vehicle-mounted road condition object clusters that comprise the same target vehicle-mounted road condition object; analyze object spatial position features between the target vehicle-mounted road condition objects in the vehicle-mounted scene image stream to be analyzed through the vehicle-mounted road condition object space conversion matrix; and determine blind area information of the target vehicle during driving based on the object spatial position features.
9. An automobile blind area intelligent early warning cloud platform, characterized by comprising a processor and a memory, wherein the processor is communicatively connected to the memory and is configured to read a computer program from the memory and execute it to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that a program is stored thereon which, when executed by a processor, implements the method of any one of claims 1-7.
CN202311445646.6A 2023-11-02 2023-11-02 Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform Active CN117373248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311445646.6A CN117373248B (en) 2023-11-02 2023-11-02 Image recognition-based intelligent early warning method and system for automobile blind area and cloud platform


Publications (2)

Publication Number Publication Date
CN117373248A true CN117373248A (en) 2024-01-09
CN117373248B CN117373248B (en) 2024-06-21

Family

ID=89402198


Country Status (1)

Country Link
CN (1) CN117373248B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050195383A1 (en) * 1994-05-23 2005-09-08 Breed David S. Method for obtaining information about objects in a vehicular blind spot
US20190050711A1 (en) * 2017-08-08 2019-02-14 Neusoft Corporation Method, storage medium and electronic device for detecting vehicle crashes
US20200126424A1 (en) * 2018-10-18 2020-04-23 Cartica Ai Ltd Blind Spot Alert
CN111923857A (en) * 2020-09-24 2020-11-13 深圳佑驾创新科技有限公司 Vehicle blind area detection processing method and device, vehicle-mounted terminal and storage medium
CN112216097A (en) * 2019-07-09 2021-01-12 华为技术有限公司 Method and device for detecting blind area of vehicle
CN116704108A (en) * 2022-02-24 2023-09-05 比亚迪股份有限公司 Road condition modeling method, road condition model obtaining method, cloud platform and vehicle-mounted system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant