CN113989705A - Method, apparatus, device and storage medium for outputting information - Google Patents

Method, apparatus, device and storage medium for outputting information

Info

Publication number
CN113989705A
CN113989705A (application CN202111238283.XA)
Authority
CN
China
Prior art keywords
information
vehicle
determining
abnormally stopped
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111238283.XA
Other languages
Chinese (zh)
Inventor
吴东峰
陈明智
孙梅媚
吕前
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202111238283.XA priority Critical patent/CN113989705A/en
Publication of CN113989705A publication Critical patent/CN113989705A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a method, apparatus, device and storage medium for outputting information, and relates to the technical fields of intelligent transportation, computer vision, and deep learning. A specific implementation scheme is as follows: acquiring an image stream; determining information of abnormally stopped vehicles in a target area according to the image stream; in response to determining that the information of the abnormally stopped vehicles satisfies a preset condition, detecting pedestrians around the abnormally stopped vehicles to obtain pedestrian information; determining whether an accident occurs in the target area based on the information of the abnormally stopped vehicles and the pedestrian information; and outputting the determined information. This implementation can predict vehicle accidents and improve accident-handling efficiency.

Description

Method, apparatus, device and storage medium for outputting information
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to the field of intelligent transportation, computer vision, and deep learning technologies, and in particular, to a method, an apparatus, a device, and a storage medium for outputting information.
Background
With the development of the social economy and the improvement of residents' living standards, the number of automobiles in the country continues to grow, but this has also brought a series of problems such as environmental pollution and frequent traffic accidents.
At present, intelligent transportation is one of the key national development strategies, and its key content is how to quickly detect traffic accidents occurring on roads, handle the accidents, and rescue the injured.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, and storage medium for outputting information.
According to a first aspect, there is provided a method for outputting information, comprising: acquiring an image stream; determining information of abnormally stopped vehicles in a target area according to the image stream; in response to determining that the information of the abnormally stopped vehicles satisfies a preset condition, detecting pedestrians around the abnormally stopped vehicles to obtain pedestrian information; determining whether an accident occurs in the target area based on the information of the abnormally stopped vehicles and the pedestrian information; and outputting the determined information.
According to a second aspect, there is provided an apparatus for outputting information, comprising: an image stream acquisition unit configured to acquire an image stream; a vehicle information determination unit configured to determine information of an abnormally stopped vehicle within the target area, based on the image stream; a pedestrian information determination unit configured to detect pedestrians around an abnormally stopped vehicle, resulting in pedestrian information, in response to determining that the information of the abnormally stopped vehicle satisfies a preset condition; an accident prediction unit configured to determine whether an accident occurs within a target area based on information of abnormally stopped vehicles and pedestrian information; an information output unit configured to output the determined information.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described in the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the first aspect.
According to the technology of the present disclosure, vehicle accidents can be predicted and accident-handling efficiency improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, according to the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for outputting information according to the present disclosure;
FIG. 4 is a flow diagram of another embodiment of a method for outputting information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a method for outputting information according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include a monitoring device 101, terminal devices 102, 103, a network 104, and a server 105. The network 104 serves to provide a medium for communication links between the monitoring device 101, the terminal devices 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The monitoring device 101 may continuously capture a video of the target area, and send the captured video to the terminal devices 102 and 103 or the server 105 through the network 104, so that the terminal devices 102 and 103 or the server 105 process the video.
The user may use the terminal devices 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. For example, various trained models are obtained from the server 105 for accident prediction or for vehicle detection, etc. Various communication client applications, such as an image processing application, may be installed on the terminal devices 102 and 103.
The terminal devices 102 and 103 may be hardware or software. When the terminal devices 102 and 103 are hardware, they may be various electronic devices, including but not limited to smart phones, tablet computers, in-car computers, laptop portable computers, desktop computers, and the like. When the terminal devices 102 and 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
The server 105 may be a server providing various services, such as a background server providing various image processing models for the terminal devices 102, 103 or a background server providing a processing server for the video captured by the monitoring device 101.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed cluster composed of multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. This is not specifically limited here.
It should be noted that the method for outputting information provided by the embodiment of the present disclosure may be executed by the terminal devices 102 and 103, or may be executed by the server 105. Accordingly, the means for outputting information may be provided in the terminal devices 102, 103, or in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present disclosure is shown. The method for outputting information of the embodiment comprises the following steps:
step 201, an image stream is acquired.
In this embodiment, the execution subject of the method for outputting information may acquire an image stream in various ways. Here, an image stream may refer to a sequence of images whose acquisition times differ by less than a preset threshold. The image stream may be extracted from a video stream. Each image in the image stream may cover the same region, which may contain multiple traveling vehicles.
Step 202, determining information of abnormally stopped vehicles in the target area according to the image stream.
In this embodiment, the execution subject may perform processing such as vehicle detection on each image in the image stream and identify each vehicle in the target area. Based on the position of each vehicle in each image, it determines whether the vehicle's position changes. The execution subject may regard a vehicle whose position remains unchanged across successive images as an abnormally stopped vehicle, and then further determine its information. The information may include the number of abnormally stopped vehicles, license plate numbers, vehicle sizes, vehicle colors, and the like.
Step 203, in response to determining that the information of the abnormally stopped vehicles satisfies a preset condition, detecting pedestrians around the abnormally stopped vehicles to obtain pedestrian information.
After the execution subject obtains the information of the abnormally stopped vehicles, it may determine whether the information satisfies the preset condition; if so, it may further detect pedestrians around the vehicles. If not, it directly determines that no accident is currently occurring and performs no subsequent detection. Here, the preset condition may be one capable of determining whether the road in the target area is congested. If the number of abnormally stopped vehicles is greater than a preset threshold, or that number varies greatly over a period of time, the likelihood of road congestion is considered high. Thus, the preset condition here may include, but is not limited to: the number of abnormally stopped vehicles is smaller than a preset number, and the variation range of that number is smaller than a preset threshold.
For pedestrian detection, the execution subject may use various human-body detection algorithms, taking each detected human body as a pedestrian, and may determine the pedestrian information through such an algorithm. The pedestrian information may include the position of the pedestrian, the behavior of the pedestrian, and the like.
Step 204, determining whether an accident occurs in the target area based on the information of the abnormally stopped vehicles and the pedestrian information.
After obtaining the information of the abnormally stopped vehicles and the pedestrian information, the execution subject may compute a score from the two pieces of information using their respective weights and/or score-calculation functions. If the score is greater than a preset threshold, the likelihood of an accident is considered high; if the score is less than or equal to the preset threshold, the likelihood is considered low. Alternatively, the execution subject may input the two pieces of information into a pre-trained accident prediction model and determine whether an accident occurs in the target area based on the model's output. The accident prediction model may characterize the correspondence between the information of the abnormally stopped vehicles, the pedestrian information, and the accident prediction probability.
Step 205, outputting the determined information.
The execution subject may output the result of the determination. Specifically, if the execution subject predicts that an accident is likely to have occurred, it may output the information to the traffic police department for further judgment and handling.
With continued reference to fig. 3, a schematic diagram of one application scenario of a method for outputting information according to the present disclosure is shown. In the application scenario of fig. 3, a surveillance video is obtained by a monitor 301 observing each vehicle traveling on a target road surface. The monitor 301 may send the surveillance video to the server 302, and the server 302 may perform the processing of steps 201 to 204 on it. Then, when it determines that a current accident is likely, the server 302 transmits the information of the vehicle to a terminal device for further verification by the traffic police department.
The method for outputting information provided by the above embodiment of the present disclosure can predict vehicle accidents and improve accident-handling efficiency.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for outputting information in accordance with the present disclosure is shown. As shown in fig. 4, the method of the present embodiment may include the following steps:
step 401, acquiring a video stream of monitoring acquisition of a target area; decoding the video stream, and performing frame extraction on the decoded video stream to determine an image stream.
In this embodiment, the execution subject may obtain the video stream captured by a monitoring device for the target area. The execution subject may then decode the video stream to obtain a decoded video stream, and extract frames from it to determine the image stream. During frame extraction, one frame may be extracted from every preset number of video frames and added to the image stream.
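The frame-extraction rule described above (one frame kept out of every preset number of frames) can be sketched with simple index arithmetic; the function name and the fixed-step policy are illustrative assumptions, not part of the disclosure:

```python
def sample_frame_indices(total_frames: int, step: int) -> list:
    """Indices of the frames kept when one frame is extracted from
    every `step` consecutive frames of the decoded video stream."""
    if step <= 0:
        raise ValueError("step must be positive")
    return list(range(0, total_frames, step))
```

For example, with `step=3` a 10-frame clip yields frames 0, 3, 6, and 9; the actual decoding would typically be done by a video library before this sampling is applied.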
Step 402, determining whether a preset labeling area is received; in response to receiving the preset labeling area, taking the preset labeling area as a target area; and in response to not receiving the preset labeling area, taking the area of each image in the image stream as a target area.
The execution subject may further determine whether a preset labeling area has been received. The preset labeling area may be part of the area covered by the image stream, such as a portion of the monitored area in which vehicles can travel, and may be delineated by a technician according to the actual application scenario. If the execution subject receives a preset labeling area, it uses that area as the target area. If it does not, it uses the full area of each image in the image stream as the target area. That is, if the technician does not specify an area, the execution subject may take the entire image as the target area.
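The fallback between a technician-supplied labeling area and the full image can be sketched as follows; the `(x, y, w, h)` rectangle convention and the function name are assumptions for illustration:

```python
def resolve_target_area(labeling_area, image_shape):
    """Use the preset labeling area (x, y, w, h) when one was received;
    otherwise fall back to the full image as the target area."""
    if labeling_area is not None:
        return labeling_area
    height, width = image_shape[:2]  # image_shape as (rows, cols, ...)
    return (0, 0, width, height)
```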
Step 403, determining the position of each vehicle in the target area of each image in the image stream; determining the dwell time of each vehicle at a single location; taking each vehicle whose dwell time exceeds a preset duration as an abnormally stopped vehicle; and determining the information of the abnormally stopped vehicles.
The execution subject may analyze each image in the image stream to determine the position of each vehicle in the target area of each image, and count the dwell time of each vehicle at a single location. It may regard a vehicle whose dwell time exceeds the preset duration as an abnormally stopped vehicle. Specifically, the execution subject may confirm the position of the vehicle in each image according to the acquisition times of the images in the image stream, and then count, for each vehicle, the times at which its stay begins and ends, thereby obtaining the length of stay of each vehicle at a single location. It then further determines the information of the abnormally stopped vehicles.
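The dwell-time bookkeeping described above can be sketched as follows, assuming per-frame detections already carry a tracking ID; the position tolerance, threshold values, and function name are illustrative assumptions:

```python
def find_abnormally_stopped(detections, max_move=5.0, min_stay=60.0):
    """detections: (timestamp, vehicle_id, (x, y)) tuples sorted by
    timestamp. A vehicle is abnormally stopped once it has stayed
    within `max_move` pixels of one anchor location for at least
    `min_stay` seconds; returns {vehicle_id: dwell_time}."""
    anchors = {}   # vehicle_id -> ((x, y) anchor, time first seen there)
    stopped = {}
    for ts, vid, (x, y) in detections:
        if vid in anchors:
            (ax, ay), start = anchors[vid]
            if abs(x - ax) <= max_move and abs(y - ay) <= max_move:
                if ts - start >= min_stay:
                    stopped[vid] = ts - start  # still at the anchor
                continue
            stopped.pop(vid, None)  # vehicle moved: no longer stopped
        anchors[vid] = ((x, y), ts)
    return stopped
```

A vehicle that drifts beyond the tolerance has its anchor reset, so only a genuinely continuous stay at one location accumulates dwell time.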
In some optional implementations of the embodiment, the execution subject may determine the information of the abnormally stopped vehicle by: determining a marking frame of an abnormally stopped vehicle; and determining the overlapping area between the labeling frames.
In this implementation, the execution subject may label the abnormally stopped vehicles using a vehicle detection algorithm and determine a label box for each one. It may then determine the overlapping area between label boxes according to the position and size of each box.
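The overlapping area between two axis-aligned label boxes follows directly from their positions and sizes; the `(x, y, w, h)` top-left convention is an assumption for illustration:

```python
def box_overlap_area(box1, box2):
    """Overlap (intersection) area of two axis-aligned label boxes,
    each given as (x, y, w, h) with (x, y) the top-left corner."""
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    ix = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))  # horizontal overlap
    iy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))  # vertical overlap
    return ix * iy
```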
In some optional implementations of the embodiment, the execution subject may determine the information of the abnormally stopped vehicle by: and identifying the license plate number of the abnormally stopped vehicle, and determining the license plate number of the abnormally stopped vehicle.
In this implementation, the execution subject may perform license plate recognition on the abnormally stopped vehicle using an image processing or text recognition algorithm, and thereby determine its license plate number.
Step 404, determining the number of abnormally stopped vehicles; signal lamp detection is carried out on each image in the image stream, and the state of a signal lamp is determined; and in response to determining that the number of the abnormally stopped vehicles is smaller than a first preset number and the variation range of the number of the abnormally stopped vehicles at the signal lamps in different states is smaller than a second preset number, determining that the information of the abnormally stopped vehicles meets a preset condition.
In this embodiment, the information of the abnormally stopped vehicles may include their number. The execution subject may also perform signal-light detection on each image in the image stream to determine the state of the signal light. Specifically, the execution subject may process each image using algorithms such as erosion and dilation, first determining the region where the signal light is located and then its state, which may be a red-light, green-light, or yellow-light state. The execution subject may first determine whether the number of abnormally stopped vehicles is smaller than a first preset number (for example, 10); if so, it may further determine whether the variation range of that number across different signal-light states is smaller than a second preset number. The consideration here is that on a congested road, a change of the signal light may allow some vehicles to move on, while new vehicles may come to a stop on the congested section shortly afterwards. To avoid congestion being mistaken for an accident, the influence of the signal lights on vehicle stopping is taken into account. When both conditions are satisfied, it can be determined that the preset condition is currently met, and subsequent judgment can proceed.
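The two-part check above can be sketched as follows; the default thresholds (the first from the source's example of 10, the second an assumption) and the dictionary input format are illustrative:

```python
def meets_preset_condition(counts_by_light_state, first_max=10, second_max=3):
    """counts_by_light_state: number of abnormally stopped vehicles
    observed under each signal-light state, e.g. {"red": 4, "green": 3}.
    The condition holds when every count is below `first_max` and the
    spread of counts across light states is below `second_max`."""
    counts = list(counts_by_light_state.values())
    if not counts:
        return False
    variation = max(counts) - min(counts)
    return max(counts) < first_max and variation < second_max
```

A large spread between red-light and green-light counts suggests congestion (vehicles flushing through on green), so such cases fail the condition and skip the accident check.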
Step 405, in response to determining that the information of the abnormally stopped vehicle satisfies the preset condition, detecting pedestrians around the abnormally stopped vehicle and determining their positions and behaviors.
If the execution subject determines that the information of the abnormally stopped vehicle satisfies the preset condition, it may further detect pedestrians around the vehicle and determine their positions and behaviors. Here, "around" may be understood as the circle centered on the vehicle's center with a radius of 1.5 vehicle body lengths. During pedestrian detection, the position and behavior of each pedestrian can be determined. The position may be relative to the vehicle or may be the position within each image in the image stream. Pedestrian behaviors may include making a phone call, walking around the vehicle, and the like.
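The "around the vehicle" region, as defined above, is a simple point-in-circle test; the function name and coordinate units are illustrative assumptions:

```python
import math

def in_vehicle_periphery(vehicle_center, vehicle_length, point):
    """True when `point` lies inside the circle centered on the
    vehicle with radius 1.5 vehicle body lengths."""
    dx = point[0] - vehicle_center[0]
    dy = point[1] - vehicle_center[1]
    return math.hypot(dx, dy) <= 1.5 * vehicle_length
```

Only pedestrians for whom this test passes would contribute to the pedestrian information used in the accident judgment.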
Step 406, determining whether an accident occurs in the target area according to the information of the abnormally stopped vehicles, the pedestrian information, and the corresponding weights.
In this embodiment, the execution subject may multiply each item of the information of the abnormally stopped vehicles and the pedestrian information by its corresponding weight to determine a final score, and compare that score with a preset score threshold: if the score is greater than the threshold, the likelihood of an accident is considered high; if it is less than or equal to the threshold, the likelihood is considered low. Specifically, the information of the abnormally stopped vehicles may include: the number of abnormally stopped vehicles (Num), the abnormal stop duration (Time), and the overlapping area of the abnormally stopped vehicles (Scope). The pedestrian information may include: the number of pedestrians (Human) and the pedestrian state (Status). Each item may be given a weight, for example 0.3, 0.2, 0.15, and 0.05, respectively. The execution subject may also determine a coefficient for each item, the coefficient being determined by the item's value. See the following table:
(Coefficient table provided as an image in the original publication; the per-item coefficient values are not reproduced in this text.)
The execution subject may multiply each weight by its coefficient and sum the products to obtain the final likelihood.
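The weighted-sum computation can be sketched as follows; the item names and example weights come from the description above, while the coefficient values and threshold are assumptions (the source's coefficient table is not reproduced here):

```python
def accident_score(coefficients, weights):
    """Weighted sum of per-item coefficients; both dicts are keyed by
    item name, e.g. 'num', 'time', 'scope', 'human', 'status'."""
    return sum(weights[k] * coefficients.get(k, 0.0) for k in weights)

def accident_likely(coefficients, weights, threshold=0.5):
    """High accident likelihood when the score exceeds the threshold."""
    return accident_score(coefficients, weights) > threshold
```

For instance, with weights {num: 0.3, time: 0.2, scope: 0.15, human: 0.05} and all coefficients at 1.0, the score is 0.7, which would exceed a 0.5 threshold.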
In some optional implementations of this embodiment, the execution subject may input the above information into a pre-trained accident prediction model to obtain an accident prediction probability. If the probability is greater than a preset threshold, the likelihood of an accident is considered high.
Step 407, the determined information is output.
In some optional implementations of this embodiment, the execution subject may further perform face recognition on each image in the image stream to determine the identities of the pedestrians, and send those identities to the traffic police department for subsequent processing.
The method for outputting information provided by this embodiment of the present disclosure can comprehensively consider multiple factors, improving the accuracy and effectiveness of the judgment.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: an image stream acquisition unit 501, a vehicle information determination unit 502, a pedestrian information determination unit 503, an accident prediction unit 504, and an information output unit 505.
An image stream acquiring unit 501 configured to acquire an image stream.
A vehicle information determination unit 502 configured to determine information of abnormally stopped vehicles within the target area, based on the image stream.
A pedestrian information determination unit 503 configured to detect pedestrians around the abnormally stopped vehicle, resulting in pedestrian information, in response to determining that the information of the abnormally stopped vehicle satisfies a preset condition.
An accident prediction unit 504 configured to determine whether an accident occurs within the target area based on the information of the abnormally stopped vehicle and the pedestrian information.
An information output unit 505 configured to output the determined information.
In some optional implementations of this embodiment, the image stream acquiring unit 501 may be further configured to: acquiring a video stream collected by monitoring of a target area; decoding the video stream, and performing frame extraction on the decoded video stream to determine an image stream.
In some optional implementations of this embodiment, the apparatus 500 may further include a target region determining unit configured to: determining whether a preset labeling area is received; and in response to receiving the preset labeling area, taking the preset labeling area as a target area.
In some optional implementations of this embodiment, the target region determining unit is further configured to: and in response to not receiving the preset labeling area, taking the area of each image in the image stream as a target area.
In some optional implementations of this embodiment, the vehicle information determination unit 502 may be further configured to: determine the position of each vehicle in the target area of each image in the image stream; determine the dwell time of each vehicle at a single location; take each vehicle whose dwell time exceeds a preset duration as an abnormally stopped vehicle; and determine the information of the abnormally stopped vehicles.
In some optional implementations of the present embodiment, the vehicle information determination unit 502 may be further configured to: determining a marking frame of an abnormally stopped vehicle; and determining the overlapping area between the labeling frames.
In some optional implementations of the present embodiment, the vehicle information determination unit 502 may be further configured to: and identifying the license plate number of the abnormally stopped vehicle, and determining the license plate number of the abnormally stopped vehicle.
In some optional implementations of this embodiment, the apparatus 500 may further include a condition determining unit configured to: determining the number of abnormally stopped vehicles; signal lamp detection is carried out on each image in the image stream, and the state of a signal lamp is determined; and in response to determining that the number of the abnormally stopped vehicles is smaller than a first preset number and the variation range of the number of the abnormally stopped vehicles at the signal lamps in different states is smaller than a second preset number, determining that the information of the abnormally stopped vehicles meets a preset condition.
In some optional implementations of the present embodiment, the pedestrian information determination unit 503 may be further configured to: and detecting the pedestrians around the abnormally stopped vehicle, and determining the positions and behaviors of the pedestrians.
In some optional implementations of the present embodiment, the incident prediction unit 504 may be further configured to: and determining whether an accident occurs in the target area according to the information of the abnormally stopped vehicles, the pedestrian information and the corresponding weight.
It should be understood that units 501 to 505, which are described in the apparatus 500 for outputting information, correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above for the method for outputting information are equally applicable to the apparatus 500 and the units included therein and will not be described again here.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of the personal information of relevant users all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device 600 that performs a method for outputting information according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 6, the electronic device 600 includes a processor 601 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or loaded from a memory 608 into a Random Access Memory (RAM) 603. The RAM 603 may also store various programs and data necessary for the operation of the electronic device 600. The processor 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a memory 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processor 601 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the processor 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The processor 601 performs the various methods and processes described above, such as the method for outputting information. For example, in some embodiments, the method for outputting information may be implemented as a computer software program tangibly embodied in a machine-readable storage medium, such as the memory 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the processor 601, one or more steps of the method for outputting information described above may be performed. Alternatively, in other embodiments, the processor 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for outputting information.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be packaged as a computer program product. This program code or computer program product may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable storage medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (23)

1. A method for outputting information, comprising:
acquiring an image stream;
determining information of abnormally stopped vehicles in the target area according to the image stream;
in response to determining that the information of the abnormally stopped vehicle satisfies a preset condition, detecting pedestrians around the abnormally stopped vehicle to obtain pedestrian information;
determining whether an accident occurs in the target area based on the information of the abnormally stopped vehicle and the pedestrian information;
and outputting the determined information.
2. The method of claim 1, wherein the acquiring the image stream comprises:
acquiring a video stream collected by monitoring of a target area;
and decoding the video stream, performing frame extraction on the decoded video stream, and determining the image stream.
3. The method of claim 1, wherein the method further comprises:
determining whether a preset labeling area is received;
and in response to receiving the preset labeling area, taking the preset labeling area as the target area.
4. The method of claim 3, wherein the method further comprises:
and in response to not receiving the preset labeling area, taking the area of each image in the image stream as the target area.
5. The method of claim 1, wherein said determining information of abnormally parked vehicles within a target area from the image stream comprises:
determining the position of each vehicle in the target area of each image in the image stream;
determining a dwell time of each vehicle at a single location;
taking the vehicle with the stay time longer than the preset time as an abnormal stay vehicle;
and determining the information of the abnormally stopped vehicle.
6. The method of any of claims 1-5, wherein the determining information of the abnormally-parked vehicle comprises:
determining a marking frame of the abnormally stopped vehicle;
and determining the overlapping area between the labeling frames.
7. The method of any of claims 1-5, wherein the determining information of the abnormally-parked vehicle comprises:
and identifying the license plate number of the abnormally stopped vehicle, and determining the license plate number of the abnormally stopped vehicle.
8. The method of any of claims 1-7, wherein the method further comprises:
determining the number of abnormally stopped vehicles;
signal lamp detection is carried out on each image in the image stream, and the state of a signal lamp is determined;
and in response to determining that the number of the abnormally stopped vehicles is smaller than a first preset number and that the variation in the number of the abnormally stopped vehicles across different signal-light states is smaller than a second preset number, determining that the information of the abnormally stopped vehicles satisfies a preset condition.
9. The method of claim 1, wherein the detecting pedestrians around the abnormally stopped vehicle to obtain pedestrian information comprises:
and detecting the pedestrians around the abnormally stopped vehicle, and determining the positions and behaviors of the pedestrians.
10. The method according to any one of claims 1 to 9, wherein the determining whether an accident has occurred in the target area based on the information on the abnormally stopped vehicle and the pedestrian information includes:
and determining whether an accident occurs in the target area according to the information of the abnormally stopped vehicles, the pedestrian information and the corresponding weight.
11. An apparatus for outputting information, comprising:
an image stream acquisition unit configured to acquire an image stream;
a vehicle information determination unit configured to determine information of an abnormally stopped vehicle within a target area, from the image stream;
a pedestrian information determination unit configured to detect pedestrians around the abnormally stopped vehicle, resulting in pedestrian information, in response to determining that the information of the abnormally stopped vehicle satisfies a preset condition;
an accident prediction unit configured to determine whether an accident occurs within the target area based on the information of the abnormally stopped vehicle and the pedestrian information;
an information output unit configured to output the determined information.
12. The apparatus of claim 11, wherein the image stream acquisition unit is further configured to:
acquiring a video stream collected by monitoring of a target area;
and decoding the video stream, performing frame extraction on the decoded video stream, and determining the image stream.
13. The apparatus of claim 11, wherein the apparatus further comprises a target area determination unit configured to:
determining whether a preset labeling area is received;
and in response to receiving the preset labeling area, taking the preset labeling area as the target area.
14. The apparatus of claim 13, wherein the target region determination unit is further configured to:
and in response to not receiving the preset labeling area, taking the area of each image in the image stream as the target area.
15. The apparatus of claim 11, wherein the vehicle information determination unit is further configured to:
determining the position of each vehicle in the target area of each image in the image stream;
determining a dwell time of each vehicle at a single location;
taking the vehicle with the stay time longer than the preset time as an abnormal stay vehicle;
and determining the information of the abnormally stopped vehicle.
16. The apparatus of any one of claims 11-15, wherein the vehicle information determination unit is further configured to:
determining a marking frame of the abnormally stopped vehicle;
and determining the overlapping area between the labeling frames.
17. The apparatus of any one of claims 11-15, wherein the vehicle information determination unit is further configured to:
and identifying the license plate number of the abnormally stopped vehicle, and determining the license plate number of the abnormally stopped vehicle.
18. The apparatus according to any one of claims 11-17, wherein the apparatus further comprises a condition determining unit configured to:
determining the number of abnormally stopped vehicles;
signal lamp detection is carried out on each image in the image stream, and the state of a signal lamp is determined;
and in response to determining that the number of the abnormally stopped vehicles is smaller than a first preset number and that the variation in the number of the abnormally stopped vehicles across different signal-light states is smaller than a second preset number, determine that the information of the abnormally stopped vehicles satisfies a preset condition.
19. The apparatus of claim 11, wherein the pedestrian information determination unit is further configured to:
and detecting the pedestrians around the abnormally stopped vehicle, and determining the positions and behaviors of the pedestrians.
20. The apparatus according to any one of claims 11-19, wherein the incident prediction unit is further configured to:
and determining whether an accident occurs in the target area according to the information of the abnormally stopped vehicles, the pedestrian information and the corresponding weight.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-10.
CN202111238283.XA 2021-10-25 2021-10-25 Method, apparatus, device and storage medium for outputting information Pending CN113989705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111238283.XA CN113989705A (en) 2021-10-25 2021-10-25 Method, apparatus, device and storage medium for outputting information

Publications (1)

Publication Number Publication Date
CN113989705A (en) 2022-01-28

Family

ID=79740772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111238283.XA Pending CN113989705A (en) 2021-10-25 2021-10-25 Method, apparatus, device and storage medium for outputting information

Country Status (1)

Country Link
CN (1) CN113989705A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination