CN111967377A - Method, device and equipment for identifying state of engineering vehicle and storage medium - Google Patents

Method, device and equipment for identifying state of engineering vehicle and storage medium Download PDF

Info

Publication number
CN111967377A
CN111967377A (application CN202010821021.5A)
Authority
CN
China
Prior art keywords
vehicle
image
engineering vehicle
target
engineering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010821021.5A
Other languages
Chinese (zh)
Inventor
胡威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010821021.5A priority Critical patent/CN111967377A/en
Publication of CN111967377A publication Critical patent/CN111967377A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the invention provide a method, device, equipment and storage medium for identifying the state of an engineering vehicle. The method includes: acquiring a target video collected by a vision sensor in a target area; determining the road scene corresponding to the target area from the target video; if the road scene is a construction scene, recognizing engineering vehicles in the target video to obtain an engineering vehicle recognition result; and determining the working state of each engineering vehicle from the images of engineering vehicles included in the recognition result. Because whether the road scene is a construction scene is determined first, and vehicle states are then recognized only from target videos of construction scenes, engineering vehicles in the construction state can be determined quickly and accurately. The traffic police department can then be notified of the construction in advance so that traffic can be controlled, and/or approaching drivers can be warned of the construction vehicles ahead via a roadside electronic display screen, effectively preventing traffic jams and accidents.

Description

Method, device and equipment for identifying state of engineering vehicle and storage medium
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a method, device, equipment and storage medium for identifying the state of an engineering vehicle.
Background
With the rapid development of society, intelligent transportation has advanced quickly. An intelligent transportation system is an integrated traffic-management system that makes traffic safer and more efficient by combining advanced sensing technology, communication means, and computer processing.
Current intelligent transportation systems, however, focus mainly on rapidly identifying and handling traffic jams and traffic accidents. Another important cause of congestion and danger is the presence of engineering vehicles under construction in traffic scenes such as street intersections and highways.
Because current intelligent transportation systems have no method for identifying whether an engineering vehicle is under construction, traffic jams and accidents caused by such construction occur frequently. Accurately and efficiently identifying the state of construction engineering vehicles is therefore an urgent technical problem in intelligent transportation.
Disclosure of Invention
Embodiments of the invention provide a method, device, equipment and storage medium for identifying the state of an engineering vehicle, addressing the technical problem that the prior art offers no way to identify whether an engineering vehicle is under construction, which leads to frequent traffic jams and accidents caused by engineering vehicle construction.
In a first aspect, an embodiment of the present invention provides a method for identifying a state of an engineering vehicle, including:
acquiring a target video acquired by a target area vision sensor;
determining a road scene corresponding to the target area according to the target video;
if the road scene is a construction scene, identifying the engineering vehicle according to the target video to obtain an engineering vehicle identification result;
and determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result.
In a second aspect, an embodiment of the present invention provides an engineering vehicle state identification device, including:
the video acquisition module is used for acquiring a target video acquired by the target area vision sensor;
the scene determining module is used for determining a road scene corresponding to the target area according to the target video;
the vehicle identification module is used for identifying the engineering vehicle according to the target video if the road scene is a construction scene so as to obtain an engineering vehicle identification result;
and the state determining module is used for determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of the first aspects.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method according to any one of the first aspect.
Embodiments of the invention provide a method, device, equipment and storage medium for identifying the state of an engineering vehicle, in which a target video collected by a vision sensor in a target area is acquired; the road scene corresponding to the target area is determined from the target video; if the road scene is a construction scene, engineering vehicles are recognized from the target video to obtain an engineering vehicle recognition result; and the working state of each engineering vehicle is determined from the images of engineering vehicles included in the recognition result. Because whether the road scene is a construction scene is determined first, and vehicle states are then recognized only from target videos of construction scenes, engineering vehicles in the construction state can be determined quickly and accurately. The traffic police department can then be notified of the construction in advance so that traffic can be controlled, and/or approaching drivers can be warned of the construction vehicles ahead via a roadside electronic display screen, effectively preventing traffic jams and accidents.
It should be understood that the summary above is not intended to identify key or critical features of embodiments of the invention, nor to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a diagram of an application scenario in which the method for identifying a state of an engineering vehicle according to an embodiment of the present invention may be implemented;
Fig. 2 is a diagram of another application scenario in which the method for identifying the state of the engineering vehicle according to the embodiment of the present invention may be implemented;
Fig. 3 is a flowchart of a method for identifying a state of an engineering vehicle according to a first embodiment of the present invention;
Fig. 4 is a flowchart of a method for identifying a state of an engineering vehicle according to a second embodiment of the present invention;
Fig. 5 is a flowchart of step 204 in the method for identifying a state of an engineering vehicle according to the second embodiment of the present invention;
Fig. 6 is a flowchart of step 209 in the method for identifying a state of an engineering vehicle according to the second embodiment of the present invention;
Fig. 7 is a flowchart of step 211 in the method for identifying a state of an engineering vehicle according to the second embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an engineering vehicle state identification device according to a third embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an engineering vehicle state identification device according to a fourth embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present invention. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For clear understanding of the technical solutions of the embodiments of the present invention, the prior art solutions are first described in detail.
In the prior art, intelligent transportation systems have no method for identifying whether an engineering vehicle is under construction, so traffic jams and accidents caused by engineering vehicle construction occur frequently.
To solve this problem, the inventor found through research that a vehicle recognition model based on a deep-learning algorithm can recognize vehicles accurately. Such a model can therefore be used to recognize engineering vehicles, and the working state of an engineering vehicle can be recognized from the features that distinguish the construction state from the non-construction state. Further research by the inventor showed that an engineering vehicle in the construction state is generally located in a construction scene. Accordingly, the target video collected by the vision sensor in the target area can be acquired first, and whether the road scene corresponding to the target area is a construction scene is determined from the target video. If it is a construction scene, the engineering vehicles in the target video are recognized, and the working state of each engineering vehicle is determined from the images of engineering vehicles included in the recognition result. Because whether the road scene is a construction scene is determined first, and vehicle states are then recognized only from target videos of construction scenes, engineering vehicles in the construction state can be determined quickly and accurately; the traffic police department can be notified of the construction in advance so that traffic can be controlled, and/or a roadside electronic display screen can warn approaching drivers of the construction vehicles ahead, effectively preventing traffic jams and accidents.
An application scenario of the method for identifying the state of an engineering vehicle provided by the embodiment of the invention is described below. As shown in Fig. 1, the application scenario may include: a vision sensor 1, an electronic device 2, and a user terminal 3. The vision sensor 1 is arranged in the target area, and the electronic device 2 communicates with the vision sensor 1 and the user terminal 3 respectively. The target area can be a preset area such as a street intersection or an expressway. The vision sensor periodically acquires video data of the target area — the target video — and transmits it to the electronic device 2, which determines the road scene corresponding to the target area from the target video. As shown in Fig. 1, if the road scene corresponding to the target area is determined to be a construction scene, engineering vehicles are recognized from the target video to obtain an engineering vehicle recognition result. As shown in Fig. 2, the electronic device 2 recognizes that the target video includes an engineering vehicle, and determines the working state of the engineering vehicle from the images of the engineering vehicle included in the recognition result; in Fig. 2, the working state is determined to be the construction state. The electronic device 2 may then send first warning information to the user terminal; the first warning information may include an identifier of the target area and may also include the target video. The user — for example a traffic police officer — can control traffic in the target area according to the first warning information received on the user terminal.
It can be understood that the method for identifying the state of an engineering vehicle provided by the embodiment of the invention can also be applied in other scenarios. Illustratively, as shown in Fig. 2, another application scenario further includes an electronic display screen 4, which may be disposed at a preset distance from the target area. After the electronic device 2 determines that the working state of the engineering vehicle is the construction state, in addition to sending the first warning information to the user terminal 3, it may also send second warning information to the electronic display screen 4; the second warning information may include the distance between the engineering vehicle and the electronic display screen. The electronic display screen 4 may display this distance to inform approaching vehicles and/or pedestrians in advance that a construction vehicle is ahead.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example one
Fig. 3 is a flowchart of a method for recognizing the state of an engineering vehicle according to a first embodiment of the present invention. As shown in Fig. 3, the execution subject of the method is an engineering vehicle state recognition device, which may be integrated into an electronic device — a computer, a notebook computer, a server, a server cluster, or another device with independent computing and processing capability. The method provided by this embodiment includes the following steps.
Step 101, acquiring a target video acquired by a target area vision sensor.
In this embodiment, the vision sensor may be a linear-array or area-array CCD camera, a TV camera, a digital camera, or the like; this embodiment does not limit the type of sensor.
In this embodiment, the target area is an area where the state of the engineering vehicle needs to be identified, for example, the target area may be an area corresponding to a street intersection, a section of highway, or the like.
In this embodiment, before obtaining a target video collected by a target area vision sensor, the vision sensor may be first fixedly disposed at the roadside or above the road of the target area, so that the viewing angle of the vision sensor faces the road of the target area.
A communication connection between the electronic device and the vision sensor is then established. The communication mode between the electronic device and the vision sensor may be Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), or a future 5G standard; this embodiment is not limited in this respect.
The vision sensor may periodically perform video acquisition on the target area, acquire and store the target video. Optionally, the electronic device may send a video acquisition request to the visual sensor, and the visual sensor acquires the target video according to the video acquisition request and sends the target video to the electronic device. Or optionally, after the video is acquired by the visual sensor in each period, the video is actively sent to the electronic device as the target video, so that the electronic device receives the target video acquired by the visual sensor.
Wherein, the target video comprises a plurality of frames of target images.
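The two acquisition modes described above — the electronic device pulling video with an acquisition request, or the sensor pushing each period's video on its own — can be sketched as follows. All class and method names here are illustrative, not taken from the patent.

```python
class VisionSensor:
    """Stand-in for the roadside sensor: captures one clip per period."""
    def __init__(self):
        self.stored = []              # clips captured so far

    def capture_period(self):
        clip = f"clip-{len(self.stored)}"  # placeholder for real frames
        self.stored.append(clip)
        return clip

class ElectronicDevice:
    def __init__(self):
        self.received = []

    # Pull mode: the device sends a video acquisition request,
    # and the sensor replies with the target video.
    def request_video(self, sensor):
        self.received.append(sensor.capture_period())

    # Push mode: the sensor actively forwards each period's clip.
    def on_video(self, clip):
        self.received.append(clip)
```

Either mode leaves the electronic device holding the target video; the rest of the pipeline is indifferent to which mode delivered it.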
And step 102, determining a road scene corresponding to the target area according to the target video.
In this embodiment, an object in the target video may be first identified, and a road scene corresponding to the target area may be determined according to the identified object type.
The road scene can be a construction scene or a non-construction scene; a non-construction scene may be an ordinary traffic scene.
For example, if the objects recognized in the target video include only non-engineering objects such as electric vehicles, tricycles, cars, buses, and/or pedestrians, the road scene is determined to be a non-construction scene. If the recognized objects include construction element objects such as yellow road cones, warning boards, construction fences, or construction workers, the road scene is determined to be a construction scene.
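The scene-determination rule just described can be sketched as a simple check over the detector's output labels. The label strings below are hypothetical — the patent names the object types but not the exact labels a detector would emit.

```python
# Hypothetical label set for construction element objects.
CONSTRUCTION_ELEMENTS = {
    "yellow_road_cone", "warning_board",
    "construction_fence", "construction_worker",
}

def classify_road_scene(detected_labels):
    """Return 'construction' if any construction element object was
    detected in the target video, otherwise 'non-construction'."""
    if CONSTRUCTION_ELEMENTS & set(detected_labels):
        return "construction"
    return "non-construction"
```

Note that ordinary vehicles and pedestrians do not trigger the construction scene; only the presence of a construction element does.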
And 103, if the road scene is a construction scene, identifying the engineering vehicle according to the target video to obtain an engineering vehicle identification result.
In this embodiment, if it is determined that the road scene is a construction scene, a vehicle identification algorithm may be used to identify each frame of image in the target video, and determine whether an engineering vehicle exists in the target video, so as to obtain an identification result of the engineering vehicle.
The engineering vehicle can be a large trailer, a road sweeper, a road wrecker, a concrete pump truck, a mixer truck, a crane, or the like; this embodiment does not limit the type of engineering vehicle.
And 104, determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result.
In this embodiment, an image including the engineering vehicle may be acquired, the state feature information of the engineering vehicle may be extracted by using the state recognition model, and the operating state of the engineering vehicle may be determined according to the state feature information, and the operating state of the engineering vehicle may be output.
The working state of the engineering vehicle can be a construction state or a non-construction state.
In the method for identifying the state of an engineering vehicle provided by this embodiment, a target video collected by a vision sensor in the target area is acquired; the road scene corresponding to the target area is determined from the target video; if the road scene is a construction scene, engineering vehicles are recognized from the target video to obtain an engineering vehicle recognition result; and the working state of each engineering vehicle is determined from the images of engineering vehicles included in the recognition result. Because whether the road scene is a construction scene is determined first, and vehicle states are then recognized only from target videos of construction scenes, engineering vehicles in the construction state can be determined quickly and accurately. The traffic police department can then be notified of the construction in advance so that traffic can be controlled, and/or approaching drivers can be warned of the construction vehicles ahead via a roadside electronic display screen, effectively preventing traffic jams and accidents.
Example two
Fig. 4 is a flowchart of a method for identifying the state of an engineering vehicle according to a second embodiment of the present invention. As shown in Fig. 4, this embodiment further details steps 102 to 103 of the method provided by the first embodiment. The method provided by this embodiment includes the following steps.
Step 201, a target video acquired by a target area vision sensor is acquired.
In this embodiment, the implementation manner of step 201 is similar to that of step 101 in the first embodiment of the present invention, and is not described in detail here.
Step 202, each frame of target image in the target video is obtained.
In this embodiment, the target video is composed of a sequence of image frames; parsing the target video therefore yields multiple frames, each of which is a target image.
In this embodiment, the target image is a color image, such as an RGB image or other types of color images.
Step 203, converting each frame of target image into a grayscale image, performing gray-value normalization on the grayscale image, and resizing the normalized target image.
Optionally, in this embodiment, each frame of color target image is converted into a grayscale image using a conversion algorithm, and the gray values in the grayscale image are normalized so that they fall within a uniform preset range. The target image after gray-value normalization can then be resized, for example with a resize function, and is generally reduced in size.
In the method provided by this embodiment, after each frame of target image in the target video is obtained, each frame is converted into a grayscale image, the gray values are normalized, and the normalized image is resized. This greatly reduces the computing resources consumed when processing the target images, and thus speeds up recognition of both the road scene and the engineering vehicles.
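The preprocessing of step 203 — grayscale conversion, gray-value normalization, and resizing — can be sketched with NumPy as below. The output size and value range are illustrative, and the nearest-neighbour resize is a stand-in for a library routine such as cv2.resize.

```python
import numpy as np

def preprocess_frame(rgb, out_size=(256, 256), value_range=(0.0, 1.0)):
    """Grayscale-convert, normalize, and downsize one target image."""
    # RGB -> grayscale with the usual luminance weights.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Normalize gray values into the uniform preset range.
    lo, hi = value_range
    g_min, g_max = gray.min(), gray.max()
    norm = (gray - g_min) / (g_max - g_min + 1e-8) * (hi - lo) + lo
    # Nearest-neighbour resize via integer-array indexing.
    h, w = norm.shape
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    return norm[rows][:, cols]
```

A 480×640 color frame shrunk this way to 120×160 carries 1/48 of the original pixel count into the recognition models, which is where the claimed savings in computation come from.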
And step 204, identifying the construction element objects in each frame of target image to obtain a construction element object recognition result.
Wherein the construction element objects may include: blue iron sheeting, yellow road cones, warning boards, construction fences, construction workers, masonry blocks, and the like.
Optionally, in this embodiment, as shown in fig. 5, step 204 includes the following steps:
step 2041, inputting each frame of target image into a first object recognition model, recognizing the construction element object through the first object recognition model, and outputting an initial object recognition result through the first object recognition model.
Alternatively, the first object recognition model may be a CenterNet model trained to convergence.
In this embodiment, before the first object recognition model is used to recognize the construction element object in each frame of the target image, the first initial object recognition model is trained. When the first initial object recognition model is trained, a training sample is obtained, the training sample of the first initial object recognition model is an image comprising the construction element object, and the type of the construction element object is marked in the image. The types of the construction element objects can be blue iron sheet, yellow road cone, warning board, construction fence, construction worker, masonry block and the like. Wherein the image of the training sample may be the same size as the target image.
And then inputting the training sample into the first initial object recognition model, training the first initial object recognition model, judging whether the preset convergence condition is met, and if the preset convergence condition is met, determining the first initial object recognition model meeting the preset convergence condition as the first object recognition model.
In this embodiment, after the first object recognition model is obtained, each frame of target image is input into it. The model performs feature extraction, classification, and recognition on each frame and determines whether the frame includes a construction element object. If it does, the model outputs an initial object recognition result containing the type of the construction element object and its position in the target image; otherwise, it outputs an initial object recognition result indicating that the frame contains no construction element object.
Step 2042, acquiring an image of the initial object recognition result including the construction element object, and performing cropping processing on the image including the construction element object to obtain an object cropped image.
In this embodiment, images containing a construction element object are selected according to the initial object recognition result, and each such image is cropped around the position of the construction element object given in the initial result. The image may be cropped to a preset size, and the cropped image containing the construction element object is taken as the object-cropped image.
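The cropping step can be sketched as cutting a fixed-size window centred on the detected position, clamped so it stays inside the frame. The box format and crop size here are assumptions for illustration.

```python
import numpy as np

def crop_around_object(image, box, crop_size):
    """Crop a fixed-size patch centred on a detected construction element.
    `box` is (x1, y1, x2, y2) from the first-stage detector; `crop_size`
    is the preset (height, width) the second-stage model expects."""
    h, w = image.shape[:2]
    ch, cw = crop_size
    cx = (box[0] + box[2]) // 2
    cy = (box[1] + box[3]) // 2
    # Clamp the window so the crop never runs off the frame edge.
    y0 = max(0, min(cy - ch // 2, h - ch))
    x0 = max(0, min(cx - cw // 2, w - cw))
    return image[y0:y0 + ch, x0:x0 + cw]
```

Cropping to a preset size lets the second-stage model be trained on inputs of one fixed shape, matching the training-sample sizing discussed below for the second model.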
Step 2043, inputting the object cropped image into the second object recognition model, recognizing the construction element object again through the second object recognition model, and outputting the final object recognition result through the second object recognition model.
Optionally, in this embodiment, the second object recognition model may be an SSD model trained to convergence, a YOLO model trained to convergence, or a CenterNet model trained to convergence, which is not limited in this embodiment.
Similarly, in this embodiment, before the construction element object in the object cropped image is recognized by the second object recognition model, the second initial object recognition model is trained. Unlike the training of the first initial object recognition model, the training samples of the second initial object recognition model may be the same size as the object cropped image, and thus smaller than the training sample images of the first initial object recognition model.
Step 2044, the final object recognition result is determined as a construction element object recognition result.
In this embodiment, after the second object recognition model is obtained, the object cropped image is input into the second object recognition model. The second object recognition model again performs feature extraction and classification recognition on the construction element object in the object cropped image, determines once more whether the object cropped image includes a construction element object, and outputs a recognition result indicating whether it does; this recognition result is the final object recognition result. The final object recognition result is then determined as the construction element object recognition result.
In the above engineering vehicle state identification method, when the construction element object in each frame of target image is recognized to obtain a construction element object recognition result, each frame of target image is input into the first object recognition model, the construction element object is recognized by the first object recognition model, and an initial object recognition result is output; the images whose initial object recognition result includes a construction element object are acquired and cropped to obtain object cropped images; each object cropped image is input into the second object recognition model, which recognizes the construction element object again and outputs a final object recognition result; and the final object recognition result is determined as the construction element object recognition result. The first object recognition model thus performs a preliminary screening for images including construction element objects, and the second object recognition model then accurately verifies whether the target image includes a construction element object, which effectively avoids missed detections and false detections of construction element objects and improves the accuracy of construction element object recognition.
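The two-stage screening described above might be organized as in this sketch; the detector objects and their `detect()` interface are placeholders standing in for the trained first and second object recognition models:

```python
def two_stage_recognition(frames, coarse_model, fine_model, crop_fn):
    """Two-stage cascade: a fast coarse detector screens each frame, and a
    second detector re-checks each cropped candidate region."""
    results = []
    for frame in frames:
        for obj in coarse_model.detect(frame):   # initial object recognition result
            crop = crop_fn(frame, obj["box"])    # object cropped image
            confirmed = fine_model.detect(crop)  # re-recognition on the crop
            results.extend(confirmed)            # keep only confirmed detections
    return results

class _Stub:
    """Placeholder detector; a real model would be a trained YOLO/SSD/CenterNet."""
    def __init__(self, detections):
        self.detections = detections
    def detect(self, _image):
        return list(self.detections)

coarse = _Stub([{"box": (0, 0, 8, 8), "type": "cone"}])
fine = _Stub([{"type": "cone"}])
found = two_stage_recognition(["frame0"], coarse, fine, lambda f, b: f)
print(len(found), found[0]["type"])  # 1 cone
```

The design intent is that the coarse pass keeps recall high while the fine pass on tight crops removes false positives.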
And step 205, determining the number of the construction element objects according to the construction element object identification result.
In this embodiment, the construction element recognition result records, for each frame of object cropped image, whether a construction element object exists and its type. The numbers of construction element objects of each type can therefore be counted over the object cropped images in which a construction element object exists, thereby determining the number of construction element objects in the construction scene.
And step 206, if the number of the construction element objects is larger than or equal to the preset number threshold, determining that the road scene corresponding to the target area is a construction scene.
And step 207, if the number of the construction element objects is smaller than the preset number threshold, determining that the road scene corresponding to the target area is a non-construction scene.
Optionally, in this embodiment, since the number of construction element objects is generally large in a construction scene, a number threshold may be preset according to the construction scene. The number of construction element objects is compared with the preset number threshold: if it is greater than or equal to the threshold, the road scene corresponding to the target area is determined to be a construction scene; otherwise, it is determined to be a non-construction scene.
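Steps 205 to 207 amount to counting and thresholding, which can be sketched as below; the threshold value of 5 and the per-frame result format are illustrative assumptions:

```python
from collections import Counter

def classify_scene(recognition_results, threshold=5):
    """Count construction element objects across per-frame recognition
    results and compare the total against a preset number threshold."""
    counts = Counter()
    for result in recognition_results:
        for obj in result:                 # objects found in one cropped image
            counts[obj["type"]] += 1
    total = sum(counts.values())
    scene = "construction" if total >= threshold else "non-construction"
    return scene, counts

results = [[{"type": "cone"}, {"type": "sign"}],
           [{"type": "cone"}] * 4]
scene, counts = classify_scene(results)
print(scene, counts["cone"])  # construction 5
```

Counting per type, as here, also lets the threshold later be refined per object category if needed.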
It can be understood that even if the road scene corresponding to the target area is determined to be a non-construction scene, the engineering vehicle can still be recognized from the target video. If an engineering vehicle is recognized, its working state can be judged from time-sequence information before and after the target video, behavior analysis of people appearing in the video, and the like. For example, if the engineering vehicle in the target video shows a position deviation between adjacent frames, it is determined to be in a travelling state; and if a construction worker is present in the scene, whether the engineering vehicle is in a construction state can be judged from the worker's posture.
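The adjacent-frame position-deviation check mentioned here could look like the following sketch; the centre-point format and the jitter tolerance `min_shift` are assumed values:

```python
def is_travelling(positions, min_shift=3.0):
    """Judge whether a tracked engineering vehicle is travelling from the
    displacement of its bounding-box centre between adjacent frames.
    min_shift (pixels) absorbs detector jitter between frames."""
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 > min_shift:
            return True
    return False

print(is_travelling([(10, 10), (10, 10), (11, 10)]))  # False: within jitter
print(is_travelling([(10, 10), (20, 12), (31, 15)]))  # True
```

A production tracker would associate detections across frames first; this sketch assumes the per-frame centres of one vehicle are already available.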
And step 208, if the road scene is a construction scene, acquiring each frame of target image in the target video.
It can be understood that after each frame of target image in the target video is obtained, each frame of target image is also converted into a grayscale image, the grayscale image is subjected to gray-value normalization, and the normalized target image is then resized.
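A minimal sketch of this preprocessing, assuming conventional BT.601 luma weights, min-max gray normalization, and a 224×224 target size (none of which are mandated by the embodiment):

```python
import numpy as np

def preprocess(frame, size=(224, 224)):
    """Convert an RGB frame to grayscale, normalize gray values to [0, 1],
    and resize with nearest-neighbour sampling."""
    gray = frame @ np.array([0.299, 0.587, 0.114])                   # grayscale conversion
    gray = (gray - gray.min()) / max(gray.max() - gray.min(), 1e-8)  # gray-value normalization
    h, w = gray.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return gray[np.ix_(rows, cols)]                                  # size adjustment

frame = np.random.rand(480, 640, 3)
out = preprocess(frame)
print(out.shape)  # (224, 224)
```

In practice `cv2.cvtColor` and `cv2.resize` with a proper interpolation mode would replace the manual sampling; the steps and their order match the description above.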
And step 209, identifying the preset target vehicle for each frame of target image to obtain an identification result of the preset target vehicle.
The preset target vehicle is a motor vehicle belonging to a set of categories that includes engineering vehicles, for example an electric vehicle, a tricycle, a car, a passenger car, or an engineering vehicle.
Optionally, in this embodiment, as shown in fig. 6, step 209 includes the following steps:
at step 2091, each frame of target image is input into the first vehicle identification model.
Optionally, in this embodiment, the first vehicle recognition model may be a YOLO model trained to convergence; since the YOLO model takes less time per recognition, it increases the speed of vehicle recognition.
In this embodiment, before the first vehicle identification model is used to identify the preset target vehicle in each frame of target image, the first initial vehicle identification model is trained. When the first initial vehicle recognition model is trained, a training sample is obtained, wherein the training sample is an image comprising at least one preset target vehicle, and the type of the preset target vehicle is marked in the image. Wherein the image of the training sample may be the same size as the target image.
The training samples are then input into the first initial vehicle recognition model to train it, and whether a preset convergence condition is met is judged; if the preset convergence condition is met, the first initial vehicle recognition model meeting the condition is determined as the first vehicle recognition model.
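The train-until-convergence procedure can be abstracted as the loop below; the `model_step` callable returning a loss and the tolerance-based convergence condition are assumptions for illustration:

```python
def train_until_converged(model_step, max_epochs=100, tol=1e-4):
    """Generic training loop: iterate until a preset convergence condition
    (here, loss change below tol) is met or max_epochs is reached."""
    prev = float("inf")
    for epoch in range(max_epochs):
        loss = model_step()               # one training step/epoch returning the loss
        if abs(prev - loss) < tol:        # preset convergence condition satisfied
            return epoch, loss
        prev = loss
    return max_epochs, prev

losses = iter([1.0, 0.5, 0.5])            # stub losses standing in for real training
epoch, loss = train_until_converged(lambda: next(losses))
print(epoch, loss)  # 2 0.5
```

A real detector would use a validation-mAP criterion or a fixed schedule as well; the patent only requires that some preset convergence condition be checked.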
Step 2092, identifying the preset target vehicle through the first vehicle identification model, and outputting the identification result of the preset target vehicle through the first vehicle identification model.
In this embodiment, after the first vehicle recognition model is obtained, each frame of target image is input into the first vehicle recognition model. The first vehicle recognition model performs feature extraction, classification recognition and similar processing on each frame of target image, and finally determines whether the frame includes a preset target vehicle. If a preset target vehicle is determined to be included, a recognition result is output indicating that the target image includes a preset target vehicle, together with the type of the preset target vehicle and its position in the corresponding target image. If no preset target vehicle is determined to be included, a recognition result indicating that the target image includes no preset target vehicle is output.
Step 210, acquiring the images whose recognition result indicates that a preset target vehicle is included, and cropping the images including the preset target vehicle to obtain vehicle cropped images.
In this embodiment, the implementation manner of step 210 is similar to that of step 2042, and is not described herein again.
And step 211, identifying the engineering vehicle according to the vehicle cropped image to obtain an engineering vehicle identification result.
Optionally, in this embodiment, as shown in fig. 7, step 211 includes the following steps:
step 2111, the vehicle cropped image is input into the second vehicle identification model.
In this embodiment, the second vehicle identification model may be a YOLO model trained to convergence, an SSD model trained to convergence, or a CenterNet model trained to convergence, which is not limited in this embodiment.
And step 2112, identifying the engineering vehicle through the second vehicle identification model, and outputting an engineering vehicle identification result through the second vehicle identification model.
Similarly, in this embodiment, before the engineering vehicle in each frame of vehicle cropped image is recognized by the second vehicle identification model, the second initial vehicle identification model is trained. When the second initial vehicle identification model is trained, a training sample is obtained; the training sample is an image including at least one engineering vehicle, with the engineering vehicle marked in the image. The image of the training sample may be the same size as the vehicle cropped image.
The process of training the second vehicle identification model is similar to the process of training the first vehicle identification model, and is not repeated here.
In this embodiment, after the second vehicle identification model is obtained, each frame of vehicle cropped image is input into the second vehicle identification model. The second vehicle identification model performs feature extraction, classification, recognition and similar processing on each frame of vehicle cropped image, and finally determines whether the frame includes an engineering vehicle. If an engineering vehicle is determined to be included, an identification result indicating that the image includes the engineering vehicle is output, together with the position of the engineering vehicle in the corresponding vehicle cropped image. If no engineering vehicle is determined to be included, an identification result indicating that the image includes no engineering vehicle is output.
In step 212, the state of the lamp of the engineering vehicle is determined according to the image comprising the engineering vehicle.
Optionally, in this embodiment, step 212 includes the following steps:
step 2121, inputting an image comprising the engineering vehicle into the vehicle lamp identification model.
And step 2122, recognizing the lamp state of the engineering vehicle through the lamp recognition model, and outputting the lamp state of the engineering vehicle through the lamp recognition model.
Optionally, in this embodiment, the vehicle lamp identification model is a model trained to convergence. The image including the engineering vehicle is input into the vehicle lamp identification model, which extracts the vehicle lamp features of the engineering vehicle and performs classification recognition, obtains the lamp state in each frame of image including the engineering vehicle, and outputs the lamp state of the engineering vehicle.
The lamp state may be an on (lit) state or an off state. The vehicle lamp here refers to the flashing warning lamp of the engineering vehicle.
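For illustration only, a brightness heuristic can stand in for the trained vehicle lamp identification model; the region of interest and the threshold are assumptions, and the patent itself uses a learned classifier rather than this rule:

```python
import numpy as np

def lamp_state(vehicle_image, roi, on_threshold=0.8):
    """Stand-in for the lamp recognition model: classify the beacon
    region of the vehicle image as 'on' or 'off' from its mean brightness.
    roi = (x, y, w, h) locates the flashing lamp (assumed known here)."""
    x, y, w, h = roi
    patch = vehicle_image[y:y + h, x:x + w]
    return "on" if patch.mean() > on_threshold else "off"

img = np.zeros((64, 64))
img[4:12, 28:36] = 1.0                                 # bright beacon area
print(lamp_state(img, (28, 4, 8, 8)))                  # on
print(lamp_state(np.zeros((64, 64)), (28, 4, 8, 8)))   # off
```

A learned model avoids hand-tuning the threshold and is robust to daylight glare, which is presumably why the embodiment trains one.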
And step 213, determining the working state of the engineering vehicle according to the lamp state of the engineering vehicle.
Optionally, in this embodiment, step 213 includes the following steps:
and step 2131, if the lamp state of the engineering vehicle is determined to be a lighting state, determining that the working state of the engineering vehicle is a construction state.
And step 2132, if the lamp state of the engineering vehicle is determined to be an off state, determining that the working state of the engineering vehicle is a suspected construction state.
Specifically, in this embodiment, since the lamp of the engineering vehicle is lit while the vehicle is constructing, the working state of the engineering vehicle is determined from its lamp state: if the lamp state is the on state, the working state is determined to be the construction state; otherwise, if the lamp state is the off state, the working state is determined to be the suspected construction state.
After step 2132, step 2133 is also included.
And 2133, sending warning information to the user terminal, wherein the warning information comprises identification information of the target video, and the warning information is used for indicating a user to determine the working state of the engineering vehicle according to the target video.
Optionally, in this embodiment, if the lamp state is determined to be the off state, the lamp of the engineering vehicle may be damaged and unable to light even though the vehicle is actually constructing. Therefore, in this embodiment, warning information including the identification information of the target video is sent to the user terminal, so that the user can retrieve the target video using that identification information and manually confirm the working state of the engineering vehicle from the target video.
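Steps 2131 to 2133 reduce to a small decision rule; the warning-message fields, including the example video identifier, are hypothetical:

```python
def decide_working_state(lamp, video_id="video-001"):
    """Map the recognized lamp state to a working state: a lit lamp means
    construction; an unlit lamp means suspected construction plus a
    warning asking the user to review the target video manually."""
    if lamp == "on":
        return "construction", None
    warning = {"type": "warning",
               "video_id": video_id,  # identification information of the target video
               "message": "Lamp off: confirm working state from the target video."}
    return "suspected construction", warning

state, warn = decide_working_state("off")
print(state, warn is not None)  # suspected construction True
```

How the warning reaches the user terminal (push message, HTTP call, etc.) is left open by the embodiment, so only the payload is sketched.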
According to the engineering vehicle state identification method provided by this embodiment, when the engineering vehicle is identified from the target video to obtain an engineering vehicle identification result, each frame of target image in the target video is acquired; the preset target vehicle is recognized in each frame of target image to obtain a recognition result of the preset target vehicle; the images whose recognition result includes a preset target vehicle are acquired and cropped to obtain vehicle cropped images; and the engineering vehicle is identified from the vehicle cropped images to obtain the engineering vehicle identification result. Recognizing the preset target vehicle first and then identifying the engineering vehicle within the images that include a preset target vehicle effectively avoids missed detection of engineering vehicles and improves the accuracy of engineering vehicle identification.
EXAMPLE III
Fig. 8 is a schematic structural diagram of an engineering vehicle state identification device according to a third embodiment of the present invention. As shown in fig. 8, the engineering vehicle state identification device 30 provided in this embodiment is integrated into an electronic device. The engineering vehicle state identification device provided by this embodiment includes: a video acquisition module 31, a scene determination module 32, a vehicle identification module 33, and a status determination module 34.
The video acquiring module 31 is configured to acquire a target video acquired by a target area vision sensor. And the scene determining module 32 is configured to determine a road scene corresponding to the target area according to the target video. And the vehicle identification module 33 is configured to identify the engineering vehicle according to the target video if the road scene is a construction scene, so as to obtain an engineering vehicle identification result. And the state determining module 34 is used for determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result.
The engineering vehicle state identification device provided in this embodiment may implement the technical solution of the method embodiment shown in fig. 3, and the implementation principle and technical effect are similar, which are not described herein again.
EXAMPLE IV
Fig. 9 is a schematic structural diagram of an engineering vehicle state identification device according to a fourth embodiment of the present invention. As shown in fig. 9, the engineering vehicle state identification device 40 provided in this embodiment is based on the engineering vehicle state identification device 30 of the third embodiment and further includes: an image preprocessing module 41 and an information sending module 42.
Optionally, the scene determining module 32 is specifically configured to:
acquiring each frame of target image in a target video; identifying the construction element object in each frame of target image to obtain a construction element object identification result; determining the number of the construction element objects according to the construction element object identification result; if the number of the construction element objects is larger than or equal to the preset number threshold, determining that a road scene corresponding to the target area is a construction scene; and if the number of the construction element objects is smaller than the preset number threshold, determining that the road scene corresponding to the target area is a non-construction scene.
Optionally, the scene determining module 32, when identifying the construction element object in each frame of the target image to obtain a construction element object identification result, is specifically configured to:
inputting each frame of target image into a first object recognition model, recognizing the construction element object through the first object recognition model, and outputting an initial object recognition result through the first object recognition model; acquiring an image of the initial object identification result, which comprises a construction element object, and cutting the image of the construction element object to obtain an object cut image; inputting the object trimming image into a second object recognition model, recognizing the construction element object again through the second object recognition model, and outputting a final object recognition result through the second object recognition model; and determining the final object recognition result as a construction element object recognition result.
Optionally, the vehicle identification module 33 is specifically configured to:
acquiring each frame of target image in a target video; recognizing a preset target vehicle for each frame of target image to obtain a recognition result of the preset target vehicle; acquiring an image including a preset target vehicle in the recognition result of the preset target vehicle, and cutting the image including the preset target vehicle to obtain a vehicle cut image; and identifying the engineering vehicle according to the vehicle trimming image to obtain an engineering vehicle identification result.
Optionally, the vehicle identification module 33, when performing identification of the preset target vehicle on each frame of target image to obtain an identification result of the preset target vehicle, is specifically configured to:
inputting each frame of target image into a first vehicle identification model; and identifying the preset target vehicle through the first vehicle identification model, and outputting the identification result of the preset target vehicle through the first vehicle identification model.
Optionally, the vehicle identification module 33, when identifying the engineering vehicle according to the vehicle trimming image to obtain an engineering vehicle identification result, is specifically configured to:
inputting the vehicle trimming image into a second vehicle recognition model; and identifying the engineering vehicle through the second vehicle identification model, and outputting an engineering vehicle identification result through the second vehicle identification model.
Optionally, the image preprocessing module 41 is configured to convert each frame of target image into a grayscale image, and perform grayscale normalization on the grayscale image; and carrying out size adjustment processing on the target image subjected to the gray value normalization processing.
Optionally, the state determining module 34 is specifically configured to:
determining a lamp state of the engineering vehicle according to the image comprising the engineering vehicle; and determining the working state of the engineering vehicle according to the lamp state of the engineering vehicle.
Optionally, the state determining module 34, when determining the lamp state of the engineering vehicle according to the image including the engineering vehicle, is specifically configured to:
inputting an image including the engineering vehicle into a vehicle lamp identification model; and recognizing the lamp state of the engineering vehicle through the lamp recognition model, and outputting the lamp state of the engineering vehicle through the lamp recognition model.
Optionally, the state determining module 34, when determining the working state of the engineering vehicle according to the lamp state of the engineering vehicle, is specifically configured to:
if the lamp state of the engineering vehicle is determined to be a lighting state, determining that the working state of the engineering vehicle is a construction state; and if the lamp state of the engineering vehicle is determined to be a light-off state, determining that the working state of the engineering vehicle is a suspected construction state.
Optionally, the information sending module 42 is configured to send warning information to the user terminal, where the warning information includes identification information of the target video, and the warning information is used to indicate a user to determine the working state of the engineering vehicle according to the target video.
The engineering vehicle state identification device provided in this embodiment may implement the technical solutions of the method embodiments shown in fig. 4 to fig. 7, and the implementation principles and technical effects thereof are similar and will not be described herein again.
EXAMPLE V
Fig. 10 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention, and as shown in fig. 10, the electronic device 50 includes: a memory 51, a processor 52 and a computer program.
The computer program is stored in the memory 51 and configured to be executed by the processor 52 to implement the engineering vehicle state identification method provided in the first or second embodiment of the present invention. For the related description and effects, reference may be made to the description corresponding to fig. 1 to fig. 7, which is not repeated here.
In the present embodiment, the memory 51 and the processor 52 are connected by a bus 53.
EXAMPLE VI
The sixth embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method for identifying a state of an engineering vehicle according to the first embodiment or the second embodiment of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for recognizing the state of an engineering vehicle is characterized by comprising the following steps:
acquiring a target video acquired by a target area vision sensor;
determining a road scene corresponding to the target area according to the target video;
if the road scene is a construction scene, identifying the engineering vehicle according to the target video to obtain an engineering vehicle identification result;
and determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result.
2. The method of claim 1, wherein the determining the road scene corresponding to the target area according to the target video comprises:
acquiring each frame of target image in the target video;
identifying the construction element object in each frame of target image to obtain a construction element object identification result;
determining the number of the construction element objects according to the construction element object identification result;
if the number of the construction element objects is larger than or equal to a preset number threshold value, determining that a road scene corresponding to the target area is a construction scene;
and if the number of the construction element objects is smaller than the preset number threshold, determining that the road scene corresponding to the target area is a non-construction scene.
3. The method of claim 2, wherein the identifying the construction element object in each frame of the target image to obtain a construction element object identification result comprises:
inputting each frame of target image into a first object recognition model, recognizing the construction element object through the first object recognition model, and outputting an initial object recognition result through the first object recognition model;
acquiring an image of the initial object identification result, which comprises a construction element object, and cutting the image of the construction element object to obtain an object cut image;
inputting the object trimming image into a second object recognition model, recognizing the construction element object again through the second object recognition model, and outputting a final object recognition result through the second object recognition model;
and determining the final object identification result as the construction element object identification result.
4. The method according to any one of claims 1-3, wherein the identifying the engineering vehicle according to the target video to obtain an engineering vehicle identification result comprises:
acquiring each frame of target image in the target video;
recognizing a preset target vehicle for each frame of target image to obtain a recognition result of the preset target vehicle;
acquiring an image including a preset target vehicle in the recognition result of the preset target vehicle, and cropping the image including the preset target vehicle to obtain a vehicle trimming image;
and identifying the engineering vehicle according to the vehicle trimming image to obtain an engineering vehicle identification result.
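The cropping step of claim 4 amounts to cutting the detected vehicle's bounding box out of the frame. A minimal sketch is shown below; the parameter names and the clamping of out-of-bounds boxes to the image edges are assumptions for the example, not behavior stated in the patent.

```python
import numpy as np

def crop_bbox(image: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Cut the (x, y, w, h) bounding box out of an H x W x C frame.

    Coordinates are clamped to the image bounds so a box that overhangs
    the frame edge still yields a valid (smaller) vehicle crop.
    """
    H, W = image.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return image[y0:y1, x0:x1]
```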
5. The method according to claim 4, wherein the identifying the preset target vehicle for each frame of target image to obtain the identification result of the preset target vehicle comprises:
inputting each frame of target image into a first vehicle identification model;
and identifying a preset target vehicle through the first vehicle identification model, and outputting an identification result of the preset target vehicle through the first vehicle identification model.
6. The method of claim 4, wherein the identifying the engineering vehicle according to the vehicle cropped image to obtain an engineering vehicle identification result comprises:
inputting the vehicle cropped image into a second vehicle identification model;
and identifying the engineering vehicle through the second vehicle identification model, and outputting an engineering vehicle identification result through the second vehicle identification model.
7. The method according to claim 1, wherein the determining the working state of the engineering vehicle according to the image of the engineering vehicle included in the engineering vehicle identification result comprises:
determining the lamp state of the engineering vehicle according to the image including the engineering vehicle;
and determining the working state of the engineering vehicle according to the lamp state of the engineering vehicle.
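The second step of claim 7 is a mapping from lamp state to working state. A hedged sketch follows; the specific state names and the mapping itself are assumed for illustration, since the claims do not enumerate them at this point.

```python
# Assumed mapping: a lit warning beacon indicates the engineering vehicle
# is in operation; a dark beacon indicates it is idle or in transit.
LAMP_TO_WORKING_STATE = {
    "warning_lamp_on": "working",
    "warning_lamp_off": "not_working",
}

def working_state_from_lamp(lamp_state: str) -> str:
    """Map the recognized lamp state to the vehicle's working state."""
    return LAMP_TO_WORKING_STATE.get(lamp_state, "unknown")
```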
8. The method of claim 7, wherein the determining the lamp state of the engineering vehicle according to the image including the engineering vehicle comprises:
inputting the image including the engineering vehicle into a vehicle lamp identification model;
and recognizing the lamp state of the engineering vehicle through the vehicle lamp identification model, and outputting the lamp state of the engineering vehicle through the vehicle lamp identification model.
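Claim 8 uses a learned vehicle lamp identification model. As a hypothetical stand-in for such a model, the sketch below decides on/off from the mean brightness of the presumed lamp region of the vehicle crop; the region choice (top quarter) and the threshold are assumptions, not details from the patent.

```python
import numpy as np

def lamp_state_from_crop(vehicle_crop: np.ndarray, brightness_threshold=180):
    """Toy lamp-state decision on a grayscale H x W vehicle crop.

    Assumes the warning lamp sits in the top quarter of the crop and that
    a lit lamp pushes that region's mean brightness above the threshold.
    """
    lamp_region = vehicle_crop[: max(1, vehicle_crop.shape[0] // 4)]
    return "on" if lamp_region.mean() >= brightness_threshold else "off"
```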
9. An electronic device, comprising:
a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, which computer program is executable by a processor to implement the method according to any one of claims 1-8.
CN202010821021.5A 2020-08-14 2020-08-14 Method, device and equipment for identifying state of engineering vehicle and storage medium Pending CN111967377A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010821021.5A CN111967377A (en) 2020-08-14 2020-08-14 Method, device and equipment for identifying state of engineering vehicle and storage medium


Publications (1)

Publication Number Publication Date
CN111967377A true CN111967377A (en) 2020-11-20

Family

ID=73388892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010821021.5A Pending CN111967377A (en) 2020-08-14 2020-08-14 Method, device and equipment for identifying state of engineering vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN111967377A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894701A (en) * 2016-04-05 2016-08-24 江苏电力信息技术有限公司 Large construction vehicle identification and alarm method for preventing external damage to transmission lines
CN107679495A (en) * 2017-10-09 2018-02-09 济南大学 A kind of detection method of transmission line of electricity periphery activity engineering truck
CN110942038A (en) * 2019-11-29 2020-03-31 腾讯科技(深圳)有限公司 Traffic scene recognition method, device, medium and electronic equipment based on vision
CN111144179A (en) * 2018-11-06 2020-05-12 富士通株式会社 Scene detection device and method


Cited By (7)

Publication number Priority date Publication date Assignee Title
CN113158725A (en) * 2020-12-29 2021-07-23 神思电子技术股份有限公司 Comprehensive engineering vehicle construction action judgment method
CN113158725B (en) * 2020-12-29 2022-02-08 神思电子技术股份有限公司 Comprehensive engineering vehicle construction action judgment method
CN113192340A (en) * 2021-03-26 2021-07-30 北京中交兴路信息科技有限公司 Method, device, equipment and storage medium for identifying highway construction vehicles
CN113192340B (en) * 2021-03-26 2022-09-20 北京中交兴路信息科技有限公司 Method, device, equipment and storage medium for identifying highway construction vehicles
CN116311015A (en) * 2021-12-21 2023-06-23 北京嘀嘀无限科技发展有限公司 Road scene recognition method, device, server, storage medium and program product
CN114495509A (en) * 2022-04-08 2022-05-13 四川九通智路科技有限公司 Method for monitoring tunnel running state based on deep neural network
CN114495509B (en) * 2022-04-08 2022-07-12 四川九通智路科技有限公司 Method for monitoring tunnel running state based on deep neural network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination