CN114708542B - Image processing method, image processing apparatus, and computer-readable storage medium - Google Patents



Publication number
CN114708542B
CN114708542B (granted publication of application CN202210627013.6A)
Authority
CN
China
Prior art keywords: target, event, image, target object, historical
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN202210627013.6A
Other languages
Chinese (zh)
Other versions
CN114708542A (en
Inventor
梁桥
王维
夏循龙
邓兵
黄建强
Current Assignee
Hangzhou Alibaba Cloud Feitian Information Technology Co ltd
Original Assignee
Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Damo Institute Hangzhou Technology Co Ltd filed Critical Alibaba Damo Institute Hangzhou Technology Co Ltd
Priority to CN202210627013.6A priority Critical patent/CN114708542B/en
Publication of CN114708542A publication Critical patent/CN114708542A/en
Application granted granted Critical
Publication of CN114708542B publication Critical patent/CN114708542B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method, an image processing apparatus, and a computer-readable storage medium. The method comprises the following steps: receiving an event anomaly result and a target snapshot image, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, and the target snapshot image is captured from that target monitoring video; determining a target object involved in the abnormal event, and acquiring target features of the target object based on the target snapshot image; acquiring, based on the target features, a historical snapshot image of the target object, wherein the historical snapshot image is captured from historical monitoring video collected by historical end-side devices along the target object's historical path; and determining an event image of the target object based on the target snapshot image and the historical snapshot image. The invention solves the technical problem in the related art that evidence of traffic event targets cannot be collected with both high recall and high precision.

Description

Image processing method, image processing apparatus, and computer-readable storage medium
Technical Field
The present invention relates to the field of data processing, and in particular, to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background
With the development of intelligent tools, intelligent detection of abnormal events has become increasingly common. For example, detection of abnormal traffic events is a basic function of intelligent traffic cameras and intelligent traffic systems, and is widely applied to road management, violation penalties, and the like.
In the related art, evidence of traffic event targets in traffic events is collected directly by gun-ball linkage or by target-tracking snapshot. These two methods share the following problem: bullet cameras, dome cameras, and other snapshot cameras on traffic roads are mounted high and cover distant areas, which leads to low recall and low precision when collecting evidence of traffic event targets. That is, in the related art, evidence of traffic event targets cannot be collected with both high recall and high precision.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an image processing method, an image processing apparatus, and a computer-readable storage medium, which at least solve the technical problem in the related art that evidence of traffic event targets cannot be collected with high recall and high precision.
According to an aspect of an embodiment of the present invention, there is provided an image processing method including: receiving an event anomaly result and a target snapshot image, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, and the target snapshot image is captured from the target monitoring video; determining a target object involved in the abnormal event, and acquiring target features of the target object based on the target snapshot image; acquiring a historical snapshot image of the target object based on the target features, wherein the historical snapshot image is captured from historical monitoring video collected by historical end-side devices along the target object's historical path; and determining an event image of the target object based on the target snapshot image and the historical snapshot image.
Optionally, the acquiring a target feature of the target object based on the target snapshot image includes: extracting object features of the target object from the target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
Optionally, the acquiring of the historical snapshot image of the target object based on the target features includes: determining a plurality of candidate snapshot images of the target object on the historical path based on the target features; and screening out the historical snapshot image of the target object from the plurality of candidate snapshot images.
Optionally, the screening out of the historical snapshot images of the target object from the plurality of candidate snapshot images includes: respectively determining confidence degrees of the candidate snapshot images based on a temporal continuity constraint condition and a spatial continuity constraint condition; and screening out the historical snapshot images of the target object from the candidate snapshot images based on the confidence degrees of the candidate snapshot images.
Optionally, the determining of the event image of the target object based on the target snapshot image and the historical snapshot image includes: respectively determining the sharpness of the target snapshot image and the sharpness of the historical snapshot image; and determining, as the event images of the target object, the target snapshot image and the historical snapshot images whose sharpness is greater than a preset sharpness.
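The sharpness screening described above can be sketched as follows. The patent does not name a concrete sharpness measure, so a variance-of-Laplacian proxy is assumed here, and the function names and threshold are illustrative, not part of the disclosure:

```python
def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale image
    (list of rows); higher values suggest a sharper image."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals.append(gray[y - 1][x] + gray[y + 1][x]
                        + gray[y][x - 1] + gray[y][x + 1]
                        - 4 * gray[y][x])
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def select_event_images(images, threshold):
    """Keep only snapshots whose sharpness exceeds the preset threshold;
    `images` is a list of (image_id, grayscale_pixels) pairs."""
    return [img_id for img_id, gray in images
            if laplacian_variance(gray) > threshold]
```

In practice the snapshots would be decoded frames (e.g. NumPy arrays) and the threshold tuned per camera; the pure-Python form above only illustrates the screening step.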
According to another aspect of the embodiments of the present invention, there is also provided an image processing method, including: receiving a target monitoring video collected by a target end-side device; identifying an abnormal event from the target monitoring video to obtain an event anomaly result, and capturing a target snapshot image from the target monitoring video; and sending the event anomaly result and the target snapshot image to a cloud-side device, wherein the event anomaly result and the target snapshot image are used by the cloud-side device to determine an event image of a target object based on the target snapshot image and a historical snapshot image, the target object being an object involved in the abnormal event, and the historical snapshot image being obtained based on target features of the target object and captured from historical monitoring video collected by historical end-side devices along the target object's historical path.
According to another aspect of the embodiments of the present invention, there is also provided an image processing method, including: receiving an event anomaly result and a target snapshot image, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, the target snapshot image is captured from the target monitoring video, and the abnormal event comprises an abnormal traffic event; determining a target object involved in the abnormal event and acquiring target features of the target object based on the target snapshot image, wherein the target object comprises a target vehicle; acquiring a historical snapshot image of the target object based on the target features, wherein the historical snapshot image is captured from historical monitoring video collected by historical end-side devices along the target object's historical path; and determining an event image of the target object based on the target snapshot image and the historical snapshot image.
According to another aspect of the embodiments of the present invention, there is also provided an image processing method, including: the target end-side device sends a collected target monitoring video to an edge-side device; the edge-side device identifies an abnormal event from the target monitoring video to obtain an event anomaly result, captures a target snapshot image from the target monitoring video, and sends the event anomaly result and the target snapshot image to a cloud-side device, wherein the abnormal event comprises an abnormal traffic event; the cloud-side device determines a target object involved in the abnormal event and acquires target features of the target object based on the target snapshot image; and the cloud-side device acquires a historical snapshot image of the target object based on the target features and determines an event image of the target object based on the target snapshot image and the historical snapshot image, wherein the historical snapshot image is captured from historical monitoring video collected by historical end-side devices along the target object's historical path.
Optionally, the edge-side device is integrated on the target end-side device.
Optionally, the target end-side device comprises an augmented reality (AR) device and/or a virtual reality (VR) device, wherein the AR device and/or the VR device presents the target monitoring video based on a predetermined driver.
Optionally, the target end-side device, the edge-side device, and the cloud-side device obtain the type of the abnormal event and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video.
Optionally, the obtaining, by the target end-side device, the edge-side device, and the cloud-side device, of the type of the abnormal event, and the rendering of the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video include: under the condition that the type of the abnormal event is a first event type, performing first rendering on the target monitoring video by the target end-side device to obtain a first rendered video; under the condition that the type of the abnormal event is a second event type, performing second rendering on the target monitoring video by the edge-side device to obtain a second rendered video; under the condition that the type of the abnormal event is a third event type, performing third rendering on the target monitoring video by the cloud-side device to obtain a third rendered video; wherein the rendering data amount of the first rendering is smaller than that of the second rendering, and the rendering data amount of the second rendering is smaller than that of the third rendering.
Optionally, the obtaining, by the target end-side device, the edge-side device, and the cloud-side device, of the type of the abnormal event, and the rendering of the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video include: the target end-side device identifies the target monitoring video and, under the condition that a first-type event exists in the target monitoring video, performs fourth rendering on the target monitoring video to obtain a fourth rendered video and sends the fourth rendered video to the edge-side device; the edge-side device identifies the target monitoring video and, under the condition that a second-type event exists in the target monitoring video, performs fifth rendering on the fourth rendered video to obtain a fifth rendered video and sends the fifth rendered video to the cloud-side device; the cloud-side device identifies the target monitoring video and, under the condition that a third-type event exists in the target monitoring video, performs sixth rendering on the fifth rendered video to obtain a sixth rendered video; wherein the rendering data amount of the fourth rendering is smaller than that of the fifth rendering, and the rendering data amount of the fifth rendering is smaller than that of the sixth rendering.
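The tiered rendering in the two optional embodiments above can be sketched as a dispatch table. The event-type keys, tier names, and relative rendering data amounts below are invented for illustration and do not come from the patent:

```python
# Hypothetical tier table: event type -> (rendering tier, relative rendering data amount).
# Heavier event types are routed to devices with more compute, at a higher data cost.
RENDER_TIERS = {
    "first_type":  ("end_side",   1),    # lightweight rendering on the camera itself
    "second_type": ("edge_side",  10),   # richer rendering at the edge
    "third_type":  ("cloud_side", 100),  # full rendering in the cloud
}

def dispatch_rendering(event_type: str) -> str:
    """Return which device tier renders the surveillance video for this event type."""
    tier, _amount = RENDER_TIERS[event_type]
    return tier
```

The key property the patent states is only the ordering of rendering data amounts (end-side < edge-side < cloud-side); the concrete numbers here simply encode that ordering.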
According to another aspect of the embodiments of the present invention, there is also provided an image processing apparatus including: a receiving module configured to receive an event anomaly result and a target snapshot image, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, and the target snapshot image is captured from the target monitoring video; a first determination module configured to determine a target object involved in the abnormal event and acquire target features of the target object based on the target snapshot image; an acquisition module configured to acquire a historical snapshot image of the target object based on the target features, wherein the historical snapshot image is captured from historical monitoring video collected by historical end-side devices along the target object's historical path; and a second determination module configured to determine an event image of the target object based on the target snapshot image and the historical snapshot image.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, where the computer-readable storage medium includes a stored program, and when the program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute any one of the image processing methods described above.
According to another aspect of the embodiments of the present invention, there is also provided a computer device, including: a memory and a processor, the memory storing a computer program; the processor is configured to execute the computer program stored in the memory, and when the computer program runs, the processor is enabled to execute any one of the image processing methods.
In this optional embodiment, an event anomaly result and a target snapshot image are received, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, and the target snapshot image is captured from that target monitoring video; a target object involved in the abnormal event is determined, target features of the target object are acquired based on the target snapshot image, and, based on the target features, a historical snapshot image of the target object is acquired, the historical snapshot image being captured from historical monitoring video collected by historical end-side devices along the target object's historical path; an event image of the target object is then determined based on the target snapshot image and the historical snapshot image. By analyzing both the target snapshot image collected by the target end-side device and the historical snapshot images collected by historical end-side devices along the target object's historical path, the event image of the target object is obtained. The method is not limited to a single follow-up shooting approach that captures the event image only in a local area; instead, it combines local information with global information from all road sections, thereby improving the recall and precision of evidence collection for target objects involved in abnormal events and solving the technical problem in the related art that evidence of traffic event targets cannot be collected with both high recall and high precision.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal for implementing an image processing method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 3 is a flow diagram of another alternative image processing method according to an embodiment of the invention;
FIG. 4 is a flow diagram of yet another alternative image processing method according to an embodiment of the present invention;
FIG. 5 is a flow diagram of yet another alternative image processing method according to an embodiment of the invention;
FIG. 6 is a block diagram of an alternative image processing system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an alternative end-side device distribution scenario in accordance with an embodiment of the present invention;
FIG. 8 is a schematic diagram of a scenario of another alternative end-side device distribution according to an embodiment of the present invention;
FIG. 9 is a schematic view of an alternative scenario in which an end-side device captures images, according to an embodiment of the present invention;
FIG. 10 is a flow diagram of an alternative image processing method according to an embodiment of the invention;
FIG. 11 is a block diagram of another alternative image processing apparatus according to an embodiment of the present invention;
fig. 12 is a block diagram of a computer apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms or terms appearing in the description of the embodiments of the present application are applicable to the following explanations:
Edge computing end: an open platform, located on the side close to the object or data source, that integrates core network, computing, storage, and application capabilities and can provide near-end services to users.
Cloud side (the cloud): the central node of traditional cloud computing and the control end of edge computing. The edge is the edge side of cloud computing and is divided into infrastructure edge and device edge. The end (terminal) refers to terminal devices such as mobile phones, smart home appliances, various sensors, and cameras.
Traffic event: an abnormal event in a traffic scene, including abnormal parking, non-motor vehicles intruding into motor vehicle lanes, driving in the wrong direction, and the like.
Recall (recall rate): the ratio of the number of targets actually detected to the total number of targets that should have been detected.
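In other words, recall measures how many of the targets that should have been detected actually were. A minimal sketch of the definition (the function name and example numbers are illustrative):

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = detected targets / all targets that should have been detected."""
    total = true_positives + false_negatives
    return true_positives / total if total else 0.0

# Example: 8 of 10 violating vehicles were captured on camera.
print(recall(true_positives=8, false_negatives=2))  # 0.8
```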
Gun-ball linkage: a bullet (gun) camera with a wide field of view forms a global picture of the monitored area, and, using that global picture as a reference, a dome (ball) camera with a narrow field of view is controlled in linkage to output local detail images. Through this linkage of the bullet camera and the dome camera, the bullet camera sees the whole scene while the dome camera sees the details clearly; a system with a human in the loop can control the high-speed dome more quickly and purposefully. In an unattended system, the gun-ball linkage controller can intelligently analyze specific targets and behaviors in the bullet camera's image and automatically control the high-speed dome to track and monitor suspicious targets and areas, thereby realizing automatic intelligent monitoring.
An AR (Augmented Reality) device may supplement a real scene with a virtual scene, or interact with a real scene and a virtual scene. The AR device may restore human visual functions such as automatically recognizing a tracked object and 3D modeling a real scene around the tracked object.
The VR (Virtual Reality) device may construct a Virtual scene using the received data, and convert the constructed Virtual scene into a model that can be visually perceived.
Video rendering: a process that includes re-optimizing each frame of an image; video rendering can convert received data into image frames through a computer program and combine the image frames into a video.
Rendering data amount: generally the amount of data used to render objects in an image, which may be determined by the number of rendered objects, i.e., the rendering count. The rendering count may reflect the clarity of the rendered image or video: for the same processing device, the smaller the rendering count, the faster the device processes but the blurrier the resulting image or video; the larger the rendering count, the slower the device processes but the sharper the resulting image or video.
Example 1
There is also provided, in accordance with an embodiment of the present invention, a method embodiment of image processing, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
The method provided by embodiment 1 of the present application can be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of a hardware configuration of a computer terminal for implementing an image processing method. As shown in fig. 1, the computer terminal 10 may include one or more processors (shown as 102a, 102b, …, 102n, which may include, but are not limited to, processing devices such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device for communication functions. In addition, the computer terminal 10 may further include: a display, an input/output (I/O) interface, a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power supply, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10. As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 can be used for storing software programs and modules of application software, such as program instructions/data storage devices corresponding to the image processing method in the embodiment of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory 104, that is, implementing the image processing method of the application program. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
Under the above operating environment, the present application provides an image processing method as shown in fig. 2. Fig. 2 is a flowchart of an image processing method according to embodiment 1 of the present invention. Referring to fig. 2, the image processing method may include the steps of:
step S202, an event abnormal result and a target snapshot image are received, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video collected by target side equipment, and the target snapshot image is obtained based on the target monitoring video collected by the target side equipment through snapshot.
Step S204, determining a target object involved in the abnormal event, and acquiring target features of the target object based on the target snapshot image.
Step S206, acquiring a historical snapshot image of the target object based on the target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path.
In step S208, an event image of the target object is determined based on the target snapshot image and the historical snapshot image.
In this optional embodiment, an event anomaly result and a target snapshot image are received, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by a target end-side device, and the target snapshot image is captured from that target monitoring video; a target object involved in the abnormal event is determined, target features of the target object are acquired based on the target snapshot image, and, based on the target features, a historical snapshot image of the target object is acquired, the historical snapshot image being captured from historical monitoring video collected by historical end-side devices along the target object's historical path; an event image of the target object is then determined based on the target snapshot image and the historical snapshot image. By analyzing both the target snapshot image collected by the target end-side device and the historical snapshot images collected by historical end-side devices along the target object's historical path, the event image of the target object is obtained. The method is not limited to a single follow-up shooting approach that captures the event image only in a local area; instead, it combines local information with global information from all road sections, thereby improving the recall and precision of evidence collection for target objects involved in abnormal events and solving the technical problem in the related art that evidence of traffic event targets cannot be collected with both high recall and high precision.
In some optional embodiments, the execution subject of the image processing method may be a cloud-side device. The cloud-side device may be a distributed processing cluster with high parallel computing performance.
In some optional embodiments, the method for acquiring a target feature of a target object based on a target snapshot may include the steps of: extracting object characteristics of a target object from a target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
In this optional embodiment, the scene characteristics of the target object are determined based on the feature information including the snapshot time of the target snapshot image, the position information of the target object, and the predetermined target path, and the structured target features for the target object are generated based on the object features of the target object and the scene features including the feature information, so that the target features of the target object can be accurately acquired, and the recall rate and accuracy of forensics of the target object are improved. It should be noted that the object features may include a plurality of features, for example, any feature describing the target object, such as, when the target object is a vehicle, a license plate number of the vehicle, a model of the vehicle, a logo of the vehicle, a color of the vehicle, and the like. In addition, the scene characteristics of the target object may be characteristics related to the scene of the target object, the capturing time, the position information of the target object, and the predetermined target path are only examples, and other scene-related characteristics also belong to a part of the present application.
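The structured target feature described above can be sketched as a record that merges object features with scene features. The field names below (license plate, model, colour, timestamp, position, path) are illustrative examples drawn from this paragraph, not a schema defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class TargetFeature:
    """Structured target feature combining object and scene attributes."""
    # Object features extracted from the target snapshot image
    plate_number: str
    vehicle_model: str
    vehicle_color: str
    # Scene features: capture time, position, and predetermined target path
    snapshot_time: float                 # capture timestamp of the target snapshot
    position: tuple                      # e.g. (longitude, latitude) of the target object
    target_path: list = field(default_factory=list)  # e.g. ordered camera IDs

def build_target_feature(object_feats: dict, scene_feats: dict) -> TargetFeature:
    """Merge object features and scene features into one structured record."""
    return TargetFeature(**object_feats, **scene_feats)
```

A real system would likely also store an appearance embedding for retrieval; the record above only illustrates the object-plus-scene structuring step.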
In some optional embodiments, the method for acquiring a historical snapshot image of a target object based on a target feature may include the following steps: determining a plurality of candidate snapshot images of the target object on the historical path based on the target feature; and screening out a historical snapshot image of the target object from the plurality of candidate snapshot images.
In this optional embodiment, the historical snapshot images of the target object are acquired from the candidate snapshot images of the target object on the historical path, and the event image of the target object is acquired by combining the global information of the whole road section, improving the recall rate and precision of forensics for the target object with the abnormal event. The processing mode of first determining the plurality of candidate snapshot images and then screening the historical snapshot images from them first roughly determines the range to which the historical snapshot images belong and then screens within that range, which makes acquiring the historical snapshot images more efficient and avoids taking unnecessary or invalid snapshot images as historical snapshot images.
In some optional embodiments, the method for screening out a historical snapshot image of a target object from a plurality of candidate snapshot images may include the following steps: determining confidence degrees of the plurality of candidate snapshot images based on a temporal continuity constraint condition and a spatial continuity constraint condition; and screening out the historical snapshot image of the target object from the candidate snapshot images based on their confidence degrees.
In this optional embodiment, confidence degrees of the plurality of candidate snapshot images are determined based on the temporal continuity constraint condition and the spatial continuity constraint condition, and the historical snapshot image of the target object is screened out from the candidate snapshot images based on those confidence degrees. This is equivalent to performing confidence filtering on the candidate snapshot images using the two constraint conditions and determining the historical snapshot image of the target object according to the filtering result. In this way, the probability of erroneous forensics for the target object is reduced, and the recall rate of forensics for the target object with the abnormal event is improved.
Optionally, the temporal continuity constraint condition may be used to simply and quickly filter out, from the plurality of candidate snapshot images, snapshot images that obviously do not conform to temporal continuity. For example, if the time corresponding to one candidate snapshot image is obviously later than that of the target snapshot image, it can be determined that this snapshot image does not conform to temporal continuity and needs to be filtered out. Similarly, the spatial continuity constraint condition may be used to simply and quickly filter out snapshot images that obviously do not conform to spatial continuity. For example, if the space corresponding to one candidate snapshot image is obviously different from that of the target snapshot image, such as a candidate captured toward the sky while the target snapshot image is of the road, that candidate can be filtered out directly and quickly. The temporal continuity constraint condition and the spatial continuity constraint condition are only examples, and they may be used individually or in combination to filter the snapshot images. Moreover, other constraint conditions may also be adopted, for example, filtering by continuity of image content: if the target snapshot image and most candidate snapshot images carry an obvious object identifier but one candidate does not, that candidate can be filtered out directly.
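A minimal sketch of such confidence filtering, assuming hypothetical snapshot records carrying a timestamp and a planar position; the scoring heuristic and the speed bound are illustrative assumptions, not specified by the embodiment:

```python
def confidence(candidate, target, max_speed_mps=40.0):
    """Score a candidate snapshot against temporal and spatial continuity.

    `candidate` and `target` are dicts with 'time' (seconds) and 'pos'
    (x, y in metres). Returns 0.0 when a hard constraint is violated,
    otherwise a score in (0, 1].
    """
    dt = target["time"] - candidate["time"]
    if dt <= 0:                        # candidate later than the target snapshot
        return 0.0                     # violates temporal continuity
    dx = candidate["pos"][0] - target["pos"][0]
    dy = candidate["pos"][1] - target["pos"][1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist > max_speed_mps * dt:      # unreachable within dt
        return 0.0                     # violates spatial continuity
    # Closer in time and space -> higher confidence (illustrative heuristic).
    return 1.0 / (1.0 + dt + dist / 100.0)

def filter_history(candidates, target, threshold=0.01):
    """Keep candidates passing both constraints, best score first."""
    scored = [(confidence(c, target), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored if score > threshold]
```

Any candidate whose timestamp is after the target snapshot, or whose position could not be reached in the elapsed time, is dropped outright; the rest are ranked by the heuristic score.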
In some optional embodiments, determining an event image of the target object based on the target snapshot image and the historical snapshot image includes: determining the definition of the target snapshot image and the definition of the historical snapshot image respectively; and determining the target snapshot image and the historical snapshot image whose definition is greater than a preset definition as the event images of the target object.
In this optional embodiment, the event images of the target object are determined based on definition, which ensures the usability of the event images and improves the precision of forensics for the target object with the abnormal event. The definition of an image may be measured in various ways; for example, it may be determined based on the pixels of the image, the brightness of the image, or the saturation of the image.
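One common way to quantify image definition (sharpness) is the variance of a Laplacian response; the embodiment does not prescribe a specific measure, so the following is only an illustrative sketch over grayscale arrays:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response; higher means sharper."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_event_images(images, threshold):
    """Keep only snapshots whose definition exceeds the preset threshold."""
    return [img for img in images if sharpness(img) > threshold]
```

A uniform (blurred-flat) image scores zero, while an image with strong local contrast scores high, so thresholding on this score discards unusable snapshots.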
Fig. 3 is a flowchart of another alternative image processing method according to an embodiment of the present invention, and referring to fig. 3, the image processing method may include the following steps:
Step S302, receiving a target monitoring video collected by a target end-side device.
Step S304, identifying an abnormal event from the target monitoring video to obtain an event abnormal result, and capturing images from the target monitoring video to obtain a target snapshot image.
Step S306, sending the event abnormal result and the target snapshot image to the cloud-side device, wherein an event image of the target object is determined by the cloud-side device based on the target snapshot image and a historical snapshot image, the target object is an object with the abnormal event, the historical snapshot image is acquired based on target features of the target object, and the historical snapshot image is an image captured from a historical monitoring video collected by a historical end-side device on the historical path of the target object.
In this optional embodiment, an event abnormal result is obtained by identifying an abnormal event from the target monitoring video collected by the target end-side device, a target snapshot image is captured from the target monitoring video, and the event abnormal result and the target snapshot image are sent to the cloud-side device, which determines the event image of the target object based on the target snapshot image and the historical snapshot image. In the related art, forensics for a traffic event target is achieved through linkage of a plurality of dome cameras. That method depends on accurate calibration among the cameras, so the early-stage configuration and later-stage operation and maintenance costs are high; moreover, because the dome cameras frequently rotate to preset positions, accumulated errors are easily generated, causing target matching between cameras to fail and reducing the recall rate of traffic event forensics. In contrast, combining the target end-side device and the cloud-side device accurately acquires the event image of the target object, improving the recall rate and precision of forensics for the target object.
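Steps S302 to S306 can be sketched as a minimal event loop; the detector, snapshot function, and uplink below are hypothetical stand-ins for the modules described above:

```python
import queue

# Hypothetical stand-ins for the detector, snapshot module, and cloud uplink.
def detect_abnormal_event(frame):
    """Return an event abnormal result dict, or None if no event is found."""
    return {"type": "abnormal_parking"} if frame.get("stopped") else None

def snapshot(frame):
    """Capture the target snapshot image (here, just the frame payload)."""
    return {"image": frame["pixels"], "time": frame["time"]}

def edge_loop(frames, cloud_uplink: queue.Queue):
    """Per steps S302-S306: scan incoming video frames for abnormal events
    and forward (event abnormal result, target snapshot image) pairs."""
    for frame in frames:
        event = detect_abnormal_event(frame)
        if event is not None:
            cloud_uplink.put((event, snapshot(frame)))

uplink = queue.Queue()
edge_loop([{"stopped": False, "pixels": b"f0", "time": 1.0},
           {"stopped": True,  "pixels": b"f1", "time": 2.0}], uplink)
```

Only frames that trigger the detector generate uplink traffic, matching the design in which the cloud side receives results and snapshots rather than raw video.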
In some optional embodiments, the execution subject of the image processing method may be an edge-side device. The edge-side device may be a computing device close to the end-side devices, for example, a device that processes the data of one or a few end-side devices, so that the data of the corresponding end-side devices can be processed in time. It should be noted that the edge-side device may be integrated on an end-side device or may be independent of it; a computing device with stronger computing power may be provided in a form independent of the end-side device.
Fig. 4 is a flowchart of another alternative image processing method according to an embodiment of the present invention, and referring to fig. 4, the image processing method may include the following steps:
Step S402, receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video collected by a target end-side device, the target snapshot image is captured from the target monitoring video collected by the target end-side device, and the abnormal event includes an abnormal traffic event;
step S404, determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image, wherein the target object comprises a target vehicle;
Step S406, acquiring a historical snapshot image of the target object based on the target features, wherein the historical snapshot image is captured from a historical monitoring video collected by a historical end-side device on the historical path of the target object;
step S408, an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
In this optional embodiment, an event abnormal result indicating that an abnormal event, including an abnormal traffic event, is identified based on the target monitoring video collected by the target end-side device is received, together with the target snapshot image captured from that video; the target object with the abnormal event is determined, target features of the target object are acquired based on the target snapshot image, and based on those target features, a historical snapshot image captured from a historical monitoring video collected by a historical end-side device on the historical path of the target object is acquired; and an event image of the target object is determined based on the target snapshot image and the historical snapshot image. By analyzing the target snapshot image collected by the target end-side device together with the historical snapshot images from the historical end-side devices on the historical path, the event image of the target object is acquired. The method is not limited to a single follow-up shooting method that acquires the event image of the target object only in a local area; it acquires the event image by combining local information with the global information of the whole road section, improving the recall rate and precision of forensics for the target object with the abnormal event and solving the technical problem in the related art that evidence of a traffic event target cannot be obtained with high recall rate and high precision.
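Steps S402 to S408 can be outlined as one cloud-side handler; the feature key (a license plate string) and the history index below are simplifications assumed only for illustration:

```python
def handle_event(event_result, target_snapshot, history_index):
    """Steps S402-S408 in outline: derive the target's feature from the
    snapshot, look up its history snapshots, and assemble the event images.

    `history_index` maps a feature key (here, a plate number) to the prior
    snapshots captured along the target's historical path.
    """
    target = event_result["target"]              # S404: object in the event
    feature = target_snapshot["plate"]           # simplified target feature
    history = history_index.get(feature, [])     # S406: history snapshots
    return {"target": target,                    # S408: event images
            "event_images": [target_snapshot] + history}

index = {"A-12345": [{"plate": "A-12345", "cam": "gantry-7"}]}
result = handle_event({"target": "vehicle-1"},
                      {"plate": "A-12345", "cam": "cam-3"}, index)
```

The returned record combines the close-range target snapshot with the snapshots recovered from the historical path, which is the local-plus-global combination the embodiment relies on.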
In some optional embodiments, an execution subject of the image processing method may be a cloud-side device, and the cloud-side device collects an event image for an abnormal traffic event.
Fig. 5 is a flowchart of still another alternative image processing method according to an embodiment of the present invention, and referring to fig. 5, the image processing method may include the following steps:
Step S502, the target end-side device sends the collected target monitoring video to the edge-side device.
Step S504, the edge-side device identifies an abnormal event from the target monitoring video to obtain an event abnormal result, captures images from the target monitoring video to obtain a target snapshot image, and sends the event abnormal result and the target snapshot image to the cloud-side device.
Step S506, the cloud-side device determines the target object with the abnormal event, and acquires target features of the target object based on the target snapshot image.
Step S508, the cloud-side device acquires a historical snapshot image of the target object based on the target features, and determines an event image of the target object based on the target snapshot image and the historical snapshot image, wherein the historical snapshot image is captured from a historical monitoring video collected by a historical end-side device on the historical path of the target object.
In this optional embodiment, the target end-side device collects the target monitoring video; the edge-side device acquires this video and obtains an event abnormal result and a target snapshot image from it; and the cloud-side device receives the event abnormal result and the target snapshot image sent by the edge-side device and acquires the event image of the target object based on them. The method is not limited to a single follow-up shooting method that acquires the event image of the target object only in a local area; it acquires the event image by combining local information with the global information of the whole road section, improving the recall rate and precision of forensics for the target object with the abnormal event and solving the technical problem in the related art that evidence of a traffic event target cannot be obtained with high recall rate and high precision.
In some optional embodiments, the edge-side device is integrated on the target end-side device as an independent edge computing unit. An independent edge computing unit has stronger computing performance and can carry more complex algorithms, improving the recall rate and precision of forensics for the target object with the abnormal event.
In some optional embodiments, the abnormal event includes an abnormal traffic event. The target end-side device, the edge-side device, and the cloud-side device are combined as follows: an abnormal traffic event is identified from the target monitoring video to obtain an abnormal traffic event result; the target object with the abnormal event is determined according to the abnormal traffic event result and the target snapshot image; target features of the target object are acquired from the target snapshot image; historical snapshot images of the target object are acquired according to the target features; and event images of the target object are determined according to the target snapshot image and the historical snapshot images captured from the historical monitoring videos collected by the historical end-side devices on the historical path of the target object. In this way, detection of traffic events in the traffic domain can be achieved.
In some optional embodiments, the target end-side device includes an augmented reality (AR) device and/or a virtual reality (VR) device, wherein the AR device and/or the VR device presents the target monitoring video based on a predetermined driver. The AR device and the VR device may be fixed devices with cameras, mobile terminal devices such as mobile phones or tablet computers, or head-mounted display devices. Through the AR device and/or the VR device, the user can intuitively view the target monitoring video; the method is simple and easy to operate and improves user experience.
In some optional embodiments, the target end-side device, the edge-side device, and the cloud-side device acquire the type of the abnormal event and render the target monitoring video based on that type to obtain a corresponding rendered video. Rendering the target monitoring video according to the type of the abnormal event allows different abnormal events to be clearly marked, so that the user can more easily focus on the key objects in the target monitoring video. Therefore, rendering the target monitoring video can effectively improve the user's viewing experience.
In some optional embodiments, acquiring, by the target end-side device, the edge-side device, and the cloud-side device, the type of the abnormal event and rendering the target monitoring video based on that type to obtain a corresponding rendered video includes: in a case that the type of the abnormal event is a first event type, performing first rendering on the target monitoring video by the target end-side device to obtain a first rendered video; in a case that the type of the abnormal event is a second event type, performing second rendering on the target monitoring video by the edge-side device to obtain a second rendered video; and in a case that the type of the abnormal event is a third event type, performing third rendering on the target monitoring video by the cloud-side device to obtain a third rendered video; wherein the rendering data volume of the first rendering is smaller than that of the second rendering, and the rendering data volume of the second rendering is smaller than that of the third rendering. That is, rendering of the target monitoring video is performed by different devices for different types of abnormal events. For example, when the target monitoring video contains a lightweight abnormal event with a relatively small rendering data volume, since the processing resources of the end-side device are limited relative to the edge-side and cloud-side devices, the lightweight abnormal event may be rendered directly by the end-side device; for instance, for a simple violation such as illegal overtaking, the end-side device can simply mark the violating vehicle in the image with a violation label. When the target monitoring video contains a medium-sized abnormal event with a relatively large rendering data volume, since the processing resources of the edge-side device exceed those of the end-side device and the event does not require the large processing resources of the cloud-side device, the medium-sized abnormal event may be rendered directly by the edge-side device.
For example, when the medium-sized abnormal event is a rear-end collision, the edge-side device may directly render both vehicles involved and the damage level of the collision. When the target monitoring video contains a large abnormal event with a large rendering data volume, such an event is generally serious and involves more and finer details, so it can be rendered directly by the cloud-side device with its rich processing resources. For example, when the large abnormal event is an explosion caused by a severe vehicle impact, the cloud-side device may directly render the details of the event scene.
In addition, for the same device, the larger the rendering data volume, the slower the device's processing speed and the higher the definition of the rendered target monitoring video; the smaller the rendering data volume, the faster the processing speed and the lower the definition of the rendered video. Different renderings can therefore be set for the target monitoring videos of different types of abnormal events according to requirements. For example, if the requirement on the definition of the target monitoring video containing an abnormal event of a certain event type is relatively high, the rendering data volume corresponding to that event type may be set relatively high; if the requirement is relatively low, the rendering data volume may be set relatively low. Setting different rendering data volumes for different event types can meet users' definition requirements for the target monitoring videos of different abnormal events and improve user experience.
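The mapping from event type to rendering device and rendering data volume might be expressed as a simple routing table; the tier names and volume figures below are hypothetical, and only the ordering (first rendering volume < second < third) comes from the embodiment:

```python
# Hypothetical tiers; only the volume ordering light < medium < heavy
# is fixed by the embodiment.
RENDER_TIER = {
    "light":  ("end-side",   1),   # e.g. mark a violating vehicle
    "medium": ("edge-side",  2),   # e.g. outline both rear-end vehicles
    "heavy":  ("cloud-side", 3),   # e.g. render full scene detail
}

def route_rendering(event_type):
    """Pick the rendering device and relative data volume for an event type."""
    return RENDER_TIER[event_type]

device, volume = route_rendering("medium")
```

A table like this keeps the device assignment declarative, so a user's definition requirements for a given event type can be changed by editing one entry rather than the dispatch logic.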
In the traffic field, abnormal events may be of various types; for example, they may include abnormal parking in a traffic scene, a non-motor vehicle intruding into a motor-vehicle lane, driving in the wrong direction, and the like. In some optional embodiments, the user can have different devices apply different rendering data volumes to the monitoring videos of different abnormal traffic events according to the required definition, so that various personalized requirements can be effectively met.
In some optional embodiments, acquiring, by the target end-side device, the edge-side device, and the cloud-side device, the type of the abnormal event and rendering the target monitoring video based on that type to obtain a corresponding rendered video may include the following steps: the target end-side device identifies the target monitoring video and, in a case that an event of a first type exists in it, performs fourth rendering on the target monitoring video to obtain a fourth rendered video and sends the fourth rendered video to the edge-side device; the edge-side device identifies the target monitoring video and, in a case that an event of a second type exists in it, performs fifth rendering on the fourth rendered video to obtain a fifth rendered video and sends the fifth rendered video to the cloud-side device; and the cloud-side device identifies the target monitoring video and, in a case that an event of a third type exists in it, performs sixth rendering on the fifth rendered video to obtain a sixth rendered video; wherein the rendering data volume of the fourth rendering is smaller than that of the fifth rendering, and the rendering data volume of the fifth rendering is smaller than that of the sixth rendering. The target monitoring videos of different abnormal events can thus be rendered layer by layer according to the user's degree of attention to abnormal events of different event types. This ensures priority display of the abnormal events that users care about most and improves user experience. In addition, setting different rendering data volumes for different event types can meet users' definition requirements for the target monitoring videos of different abnormal events.
As described above, layer-by-layer rendering of the target monitoring video is performed by multiple devices for different types of abnormal events. For example, when the end-side device detects that an abnormal event exists, it simply renders the target monitoring video with its end-side processing resources and then sends the rendered video (that is, the fourth rendered video) to the edge-side device; for example, the region in which the abnormal event occurs is framed directly in the image, and the video with the framed region is sent to the edge-side device. When the edge-side device identifies that a serious abnormal event exists, it re-renders the video already rendered on the end side to obtain a re-rendered video (that is, the fifth rendered video), for example, performing color filling and line drawing on the more important objects in the event, and then sends the re-rendered video to the cloud-side device. After receiving the rendered video sent by the edge-side device and identifying the abnormal event as a major abnormal event, the cloud-side device renders the received video in detail with its rich processing resources to obtain a direct and clear video image. For example, after recognizing the severe-impact abnormal event mentioned above, the cloud-side device enhances the texture of the objects in the image, fills the background color, and highlights the descriptive animation.
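The layer-by-layer rendering described above can be sketched as a chain in which each tier adds detail on top of the previous tier's output; the renderers below are placeholders that just record which pass ran:

```python
def end_side_render(video):
    """Fourth rendering: frame the region where the event occurs."""
    return video + ["framed-region"]

def edge_side_render(video):
    """Fifth rendering: colour filling and line drawing on key objects."""
    return video + ["colour-fill"]

def cloud_side_render(video):
    """Sixth rendering: textures, background colour, highlight animation."""
    return video + ["texture-detail"]

def layered_pipeline(video, severity):
    """Render layer by layer, stopping at the tier matching the event's
    severity (1 = end-side only, 2 = + edge-side, 3 = + cloud-side)."""
    video = end_side_render(video)
    if severity >= 2:
        video = edge_side_render(video)
    if severity >= 3:
        video = cloud_side_render(video)
    return video

out = layered_pipeline(["raw"], severity=3)
```

Each later pass consumes the previous pass's output, mirroring how the fifth rendering is applied to the fourth rendered video and the sixth to the fifth.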
The event type may be determined according to various division criteria, for example, according to the degree of abnormality of the event, whether an abnormal event exists, or the degree of influence of the abnormal event.
Based on the above embodiments and alternative embodiments, an alternative implementation is provided, which is described in detail below.
In the related art, capturing detail images in a preset area close to a camera is used to obtain evidence of a traffic event target. This approach has the following defects: it is implemented on a single end-side device and strongly depends on full-course tracking of the vehicle target, so the recall rate is low under heavy traffic flow, vehicle occlusion, and other conditions where tracking is difficult; and if a vehicle involved in a violation or accident stops before entering the snapshot area, the method fails. A violation target can also be monitored panoramically and tracked by a single dome camera to obtain evidence of the traffic event target, which has the following problems: it is likewise implemented on a single end-side device and strongly depends on full-course tracking of the vehicle target, with low recall under heavy traffic, occlusion, and other difficult tracking conditions; and because the dome camera needs time for framing and rotating to the preset position, the method has poor real-time performance and fails when the violation target stays only briefly before moving again. Another method calibrates positioning parameters between two dome cameras in advance so that when one dome camera detects a violation target, it controls the other to shoot the target position; this method has the following problems: it strongly depends on accurate calibration between multiple cameras, so the early-stage configuration and later-stage operation and maintenance costs are high.
Because the dome cameras frequently rotate to preset positions, accumulated errors are easily generated over time, target matching between the two cameras fails, and the recall rate decreases. In addition, because the dome camera needs time for framing and rotating to the preset position, the method has poor real-time performance and fails when the violation target stays only briefly before moving again. In the related art, target association can also be performed according to time and position information by two cameras with overlapping fields of view, in which evidence is obtained for the violating vehicle shot by the distant-view camera and its license plate information is acquired through the close-view camera. This method has the following problems: it relies on repeated shooting by two cameras at the same position to form an overlapping field of view, so equipment costs are high; and target association relies on accurate position calibration and time synchronization between the two cameras, so the early-stage configuration and later-stage operation and maintenance costs are high. In addition, if the vehicle does not enter the coverage area of the close-view camera after the violation/accident occurs, or the close-range snapshot fails due to occlusion or other reasons, the method fails.
In view of this, the embodiments of the present disclosure provide an image processing method that receives an event abnormal result indicating that an abnormal event is identified based on the target monitoring video collected by the target end-side device, together with a target snapshot image captured from that video; determines the target object with the abnormal event, acquires target features of the target object based on the target snapshot image, and acquires, based on those target features, a historical snapshot image captured from a historical monitoring video collected by a historical end-side device on the historical path of the target object; and determines an event image of the target object based on the target snapshot image and the historical snapshot image. By analyzing the target snapshot image collected by the target end-side device together with the historical snapshot images from the historical end-side devices on the historical path, the event image of the target object is acquired. The method is not limited to a single follow-up shooting method that acquires the event image of the target object only in a local area; it acquires the event image by combining local information with the global information of the whole road section, improving the recall rate and precision of forensics for the target object with the abnormal event and solving the technical problem in the related art that evidence of a traffic event target cannot be obtained with high recall rate and high precision.
Fig. 6 is a schematic structural diagram of an alternative image processing system according to an embodiment of the present invention. Referring to fig. 6, this alternative embodiment implements the image processing method based on an image processing system comprising end-side devices, an edge computing platform (corresponding to the edge-side device in the foregoing embodiments), and a cloud platform (corresponding to the cloud-side device in the foregoing embodiments).
Fig. 7 is a schematic view of an alternative end-side device distribution according to an embodiment of the present invention, and fig. 8 is a schematic view of another alternative end-side device distribution according to an embodiment of the present invention. Referring to fig. 7 and 8, a plurality of monitoring camera groups providing basic video perception data are arranged along a traffic road; the cameras in the monitoring camera groups include, but are not limited to, perception devices such as bullet cameras, fisheye cameras, dome cameras, and checkpoint cameras. The number of monitoring camera groups is determined according to the road-section monitoring coverage requirement, and overlapping coverage areas between the groups may or may not exist. In this alternative embodiment, it is preferable that there is no overlapping coverage area between the monitoring camera groups.
All monitoring cameras monitor their covered areas in real time; based on the video data collected by the monitoring camera groups, the edge computing platform analyzes whether a traffic event has occurred and determines the traffic event target (equivalent to the target object in the foregoing embodiments). By multiplexing the gantry checkpoint cameras or other high-definition checkpoint cameras commonly erected on highways, the traffic event target can be captured when it reaches a close-range preset area to obtain a high-definition snapshot image (equivalent to the target snapshot image in the foregoing embodiments).
Fig. 9 is a schematic view of a scene captured by an alternative end-side device (corresponding to a monitoring camera group in this alternative embodiment) according to an embodiment of the present invention. The position marked by the solid-line box in fig. 9 is where the traffic event target was detected in an abnormal parking event, and the area enclosed by the dashed-line box is the snapshot area of a certain monitoring camera group. As shown in fig. 9, the traffic event target is far from the monitoring camera group, so the target occupies too few pixels; information such as its license plate cannot be acquired, and forensics for the traffic event cannot be completed from this view alone. In this alternative embodiment, a snapshot area (the dashed-box area in fig. 9) is therefore set so that a target image whose pixel resolution meets the forensic requirements can be acquired when the traffic event target passes through that area. As shown in fig. 8, if the traffic event target is located on the left side, it has not yet entered the snapshot area of the monitoring camera group when the abnormal parking event occurs, so that group cannot directly associate the event with a clear close-range snapshot. In this case, the scheme restores the historical path of the traffic event target and thereby associates the event with snapshot data from preceding monitoring camera groups, implementing forensics for the traffic event target.
Continuing with fig. 6, the edge computing platform includes a snapshot module and a traffic event detection module, which preprocess the video data acquired by the end-side devices; after preprocessing, the edge computing platform sends the resulting data to the cloud server. The two modules are described below. The data acquired by the end-side devices, together with the data produced by the edge computing platform from them, are uniformly accessed by the cloud platform, thereby implementing forensics for the traffic event target.
The traffic event detection module receives the video data sent by the monitoring cameras, identifies pedestrian and vehicle targets in the video through an artificial intelligence algorithm, and judges whether an abnormal event has occurred according to information such as the positions of those targets. Common abnormal event types include abnormal parking, traffic accidents, motor vehicles driving in the wrong direction, trucks occupying the main lane, motor vehicles occupying the emergency lane, motor vehicles occupying diversion-line (gore) areas, and solid-line lane changes. After processing the video, the traffic event detection module outputs structured traffic event data including time, position, event type, target path, target picture, and the target-associated same-camera snapshots.
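The structured event record described above can be sketched as a plain data structure. This is a hypothetical illustration; the field names and types are assumptions for clarity, not the patent's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrafficEvent:
    """Hypothetical sketch of the structured data output by the
    traffic event detection module (field names are assumed)."""
    timestamp: float                          # detection time, epoch seconds
    position: Tuple[float, float]             # (longitude, latitude) of the event
    event_type: str                           # e.g. "abnormal_parking", "wrong_way"
    target_path: List[Tuple[float, float]] = field(default_factory=list)
    target_image: bytes = b""                 # crop of the event target
    same_camera_snapshots: List[bytes] = field(default_factory=list)

event = TrafficEvent(1718000000.0, (120.15, 30.28), "abnormal_parking")
```

A downstream module would consume such records to decide whether forensics is needed and which features to fuse.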
The snapshot module receives the video data sent by the monitoring cameras, captures each vehicle target in a preset area according to a snapshot algorithm to obtain the clearest available snapshot under that monitoring camera, and attaches structured information such as target path and time. The module sends all snapshot images and the attached structured information to the cloud server.
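The "clearest available snapshot" selection amounts to keeping, per tracked target, the capture with the highest sharpness score seen so far. A minimal sketch, assuming a per-target dictionary and an externally supplied sharpness score:

```python
def update_best_snapshot(best, target_id, frame_crop, sharpness, timestamp):
    """Keep only the sharpest capture seen so far for each target.
    `best` maps target_id -> (sharpness, timestamp, frame_crop)."""
    prev = best.get(target_id)
    if prev is None or sharpness > prev[0]:
        best[target_id] = (sharpness, timestamp, frame_crop)
    return best

best = {}
update_best_snapshot(best, "car_7", "crop_a", 0.42, 100.0)
update_best_snapshot(best, "car_7", "crop_b", 0.77, 101.0)  # sharper: replaces crop_a
update_best_snapshot(best, "car_7", "crop_c", 0.30, 102.0)  # blurrier: ignored
```

When the target leaves the camera's snapshot area, the retained entry is what would be uploaded with its structured information.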
The processing functions implemented by the edge computing platform may be integrated into the end-side devices or implemented on a separate edge computing platform. In this alternative embodiment, a separate edge computing platform preferably processes the received video: it has stronger computing performance and can carry more complex algorithms, improving the recall rate and accuracy of traffic event target forensics.
The cloud platform comprises computing servers in a central machine room and can access the full set of structured data and the screened unstructured data of the highway section. With reference to fig. 6, the cloud platform accesses traffic events, accesses or stores the snapshot data of all road segments, extracts structured features of event or snapshot targets, and performs traffic event target forensics. Specifically, the cloud platform comprises a target structured feature extraction module, a snapshot database, and an event target forensics module, each explained in detail below.
The target structured feature extraction module acquires the event target and snapshot target images transmitted by the edge computing platform, extracts structured target features including vehicle body color, vehicle type, vehicle brand, and license plate number through an AI (artificial intelligence) algorithm, and merges in the originally attached information such as time, position, and target path to generate the final structured target data. This data is output to the snapshot database and to the event target forensics module, respectively, for path association between the event target and historical snapshot targets.
The snapshot database receives the structured snapshot pictures of the whole road section together with their attached structured feature information and writes them to the database in real time. The snapshot database supports extracting data by time, position, target structured features, and other criteria, allowing the event target forensics module to acquire a target's historical information.
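A retrieval of this kind — snapshots filtered by a time window plus structured features — can be sketched with an in-memory relational table. The schema and column names here are assumptions for illustration, not the patent's actual database design:

```python
import sqlite3

# Minimal in-memory sketch of the snapshot database (schema is assumed).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE snapshots (
    ts REAL, camera_id TEXT, lane INTEGER,
    color TEXT, vehicle_type TEXT, brand TEXT, plate TEXT, image_path TEXT)""")
db.execute("INSERT INTO snapshots VALUES (95.0,'cam_03',2,'white','sedan','X','A12345','p1.jpg')")
db.execute("INSERT INTO snapshots VALUES (98.0,'cam_04',2,'white','sedan','X','A12345','p2.jpg')")
db.execute("INSERT INTO snapshots VALUES (97.0,'cam_04',1,'red','truck','Y','B99999','p3.jpg')")

# Retrieve candidate preceding-camera captures by time window + structured features.
rows = db.execute(
    "SELECT image_path FROM snapshots "
    "WHERE ts BETWEEN ? AND ? AND color = ? AND vehicle_type = ? ORDER BY ts",
    (90.0, 99.0, "white", "sedan")).fetchall()
```

The real system would index by time, position, and feature columns to keep such lookups fast at full-road-section scale.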
The event target forensics module accesses, in real time, the structured traffic event detection results from the edge computing platform and judges whether the traffic event requires forensics. If it does, the following steps are performed:
The structured features of the event target image and of the associated same-camera snapshot are fused to form the structured feature information used for historical path association. The same-camera snapshot comes from the traffic event detection module.
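One plausible fusion rule — an assumption here, since the text does not specify one — is to take the event target's features and fill any unresolved fields (e.g., a plate unreadable at long range) from the same-camera snapshot:

```python
def fuse_features(event_feats, snapshot_feats):
    """Merge event-target features with same-camera snapshot features,
    preferring the snapshot's value wherever the event crop could not
    resolve the attribute. (Fusion rule is assumed for illustration.)"""
    fused = dict(event_feats)
    for key, value in snapshot_feats.items():
        if fused.get(key) in (None, "", "unknown"):
            fused[key] = value
    return fused

event_feats = {"color": "white", "plate": None, "vehicle_type": "sedan"}
snap_feats = {"color": "white", "plate": "A12345", "brand": "X"}
fused = fuse_features(event_feats, snap_feats)
```

The fused dictionary is then what gets matched against the snapshot database for path association.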
The fused structured feature information of the event target is matched against the snapshot data of the target at preceding points in the snapshot database, and path association is performed to obtain a series of candidate snapshot records. All possibly associated snapshot data can be screened from the snapshot database by arrival time, driving path under a lane or a single camera, vehicle body color, vehicle type, vehicle brand, license plate number, and so on.
Cross validation is then performed on the preliminarily associated path: a confidence is computed for each record according to event constraints and spatio-temporal continuity, and likely mismatches are removed. For example, if a snapshot's timestamp is inconsistent with the capture times of snapshots from the subsequent camera groups along the path, its confidence is low and it is filtered out. Likewise, if the license plate of a snapshot appears in a camera group downstream of the one where the parking event target is located (a stopped target should not appear in the snapshot data of downstream camera groups), its confidence is low and it is filtered out.
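The two filtering rules above can be sketched as a simple spatio-temporal consistency check. This assumes the camera groups' order along the road is known (`camera_order`), which the patent does not state explicitly; it is a minimal illustration, not the actual confidence model:

```python
def filter_candidates(candidates, camera_order, event_camera, event_time):
    """Drop candidate snapshots that violate spatio-temporal continuity.
    `candidates` is a list of (camera_id, timestamp); `camera_order`
    maps camera_id -> index along the road (assumed known).
    Rule 1: an upstream capture must predate the event.
    Rule 2: a parked target must not reappear downstream of the event."""
    kept = []
    event_pos = camera_order[event_camera]
    for cam_id, ts in candidates:
        pos = camera_order[cam_id]
        if pos < event_pos and ts >= event_time:
            continue  # upstream capture later than the event: mismatch
        if pos > event_pos:
            continue  # downstream of a stopped target: mismatch
        kept.append((cam_id, ts))
    return kept

order = {"cam_01": 0, "cam_02": 1, "cam_03": 2}
cands = [("cam_01", 90.0), ("cam_02", 95.0), ("cam_03", 99.0), ("cam_01", 120.0)]
kept = filter_candidates(cands, order, "cam_03", 100.0)  # last candidate is dropped
```

A production system would replace the hard drop with a graded confidence score combined across several such constraints.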
Among the remaining high-confidence snapshot data, the sharpness of each associated snapshot is evaluated using any of various image sharpness metrics, including but not limited to pixel size and image gradient; the one or more snapshots with the highest sharpness are output as the forensics result.
The image processing method according to this alternative embodiment will be further described with reference to the above description. Fig. 10 is a flowchart of an alternative image processing method according to an embodiment of the present invention, and referring to fig. 10, the image processing method may include the following steps:
Step S1001: a traffic event detection result and its structured features are obtained; proceed to step S1002.
Step S1002: judge whether forensics is needed; if so, proceed to step S1003, otherwise end the analysis.
Step S1003: the structured features of the event target are fused; proceed to step S1004.
Step S1004: path association with the snapshot database is performed; proceed to step S1005.
Step S1005: candidate data are screened by confidence; proceed to step S1006.
Step S1006: the sharpness of the candidate pictures is evaluated; proceed to step S1007.
Step S1007: the forensics result for the traffic event target is obtained.
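Steps S1001–S1007 compose into a linear pipeline. The sketch below expresses that flow with stand-in callables for the modules described earlier; the function names are placeholders, not the patent's API:

```python
def forensics_pipeline(event, needs_forensics, fuse, associate, score, pick_sharpest):
    """Orchestration of steps S1001-S1007 as plain function composition.
    Each callable stands in for one module of the system."""
    if not needs_forensics(event):           # S1002: no forensics needed
        return None                          # analysis ends
    feats = fuse(event)                      # S1003: fuse structured features
    candidates = associate(feats)            # S1004: path association with DB
    confident = score(candidates)            # S1005: confidence screening
    return pick_sharpest(confident)          # S1006-S1007: sharpness -> result

result = forensics_pipeline(
    {"type": "abnormal_parking"},
    needs_forensics=lambda e: e["type"] == "abnormal_parking",
    fuse=lambda e: {"plate": "A12345"},
    associate=lambda f: ["img_low.jpg", "img_high.jpg"],
    score=lambda c: c,
    pick_sharpest=lambda c: c[-1],
)
```

Structuring the flow this way keeps each stage independently testable, mirroring the module boundaries in fig. 6.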
It should be appreciated that the above alternative embodiment enables forensics not only for traffic event targets but for any vehicle.
In the above alternative embodiment, the same traffic event target is tracked across all road segments, and high-definition snapshots of the traffic event target under each camera are obtained directly, implementing traffic event target forensics with high precision and high recall. Through the end-side devices, the edge computing platform, and the cloud platform, forensics with high recall, high precision, and high real-time performance is achieved, solving the problem in the related art that traffic event targets cannot be forensically documented with high recall and high precision. On top of this alternative embodiment, bullet-dome camera linkage can be combined, and frame-extraction recognition on the dome cameras can be added, to further improve the forensics effect.
In this alternative embodiment, building the database from snapshot data of the whole road section ensures a high capture rate for candidate traffic event targets and avoids missed recalls caused by tracking failures or by the traffic event target never entering the snapshot area when a single camera group tracks it. Introducing rich structured feature information for candidate target association overcomes the low association robustness and high failure rate of related-art approaches that rely on strict time-and-position retrieval after prior calibration, ensuring high recall for traffic event target forensics. Using global information and rich structured features as the basis, combined with spatio-temporal constraints for confidence computation, effectively reduces forensics errors and improves forensics recall. Compared with related-art schemes that realize tracking snapshots through the linkage of multiple dome cameras, implementing forensics through the combination of end-side devices, an edge computing platform, and a cloud platform brings latency down to the second level, improving real-time performance. Forensics can be achieved without overlapping coverage areas between the monitoring camera groups of the end-side devices; compared with related-art schemes based on precise multi-camera calibration, this reduces camera coverage density and has the advantage of low cost.
In this alternative embodiment, traffic event target forensics is implemented through a system framework combining end-side devices, an edge computing platform, and a cloud platform. Local information acquired by the monitoring camera groups of the end-side devices is combined with global information of the whole road section, and the historical path of the traffic event target is restored based on structured features, so that the scheme has high recall, high accuracy, good real-time performance, and low cost.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the image processing method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (which may be a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is further provided an apparatus for implementing the image processing method, and fig. 11 is a block diagram of an alternative image processing apparatus according to an embodiment of the present invention, as shown in fig. 11, the apparatus including: a receiving module 1102, a first determining module 1104, an obtaining module 1106, and a second determining module 1108. The following are described separately.
The receiving module 1102 is configured to receive an event exception result and a target snapshot image, where the event exception result indicates that an exception event is identified based on a target monitoring video acquired by a target end-side device, and the target snapshot image is obtained based on a target monitoring video acquired by the target end-side device; a first determining module 1104, connected to the receiving module 1102, for determining a target object with an abnormal event, and acquiring a target feature of the target object based on a target snapshot image; an obtaining module 1106, connected to the first determining module 1104, configured to obtain a historical snapshot image of the target object based on the target feature, where the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; a second determining module 1108, connected to the acquiring module 1106, is configured to determine an event image of the target object based on the target snapshot image and the history snapshot image.
It should be noted here that the receiving module 1102, the first determining module 1104, the obtaining module 1106, and the second determining module 1108 correspond to steps S202 to S208 in embodiment 1; the implementation examples and application scenarios of these modules are the same as those of the corresponding steps, but are not limited to the disclosure in embodiment 1. The above modules may run in the computer terminal 10 provided in embodiment 1 as part of the apparatus.
Example 3
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute program codes of the following steps in the image processing method of the application program: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, and the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image; acquiring a historical snapshot image of a target object based on target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
Optionally, in this embodiment, the computer terminal may execute program codes of the following steps in the image processing method of the application program: extracting object features of a target object from a target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program. Acquiring the historical snapshot image of the target object based on the target features comprises: determining a plurality of candidate snapshot images of the target object on the historical path based on the target features; and screening the historical snapshot image of the target object out of the plurality of candidate snapshot images.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program. Screening the historical snapshot image of the target object out of the plurality of candidate snapshot images comprises: respectively determining confidences of the plurality of candidate snapshot images based on a temporal continuity constraint and a spatial continuity constraint; and screening the historical snapshot image of the target object out of the candidate snapshot images based on those confidences.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program. Determining the event image of the target object based on the target snapshot image and the historical snapshot image comprises: respectively determining the sharpness of the target snapshot image and the sharpness of the historical snapshot image; and determining those of the target snapshot image and the historical snapshot image whose sharpness exceeds a preset sharpness as the event image of the target object.
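The threshold step just described — keeping captures whose sharpness exceeds the preset value — reduces to a simple filter. The threshold value and tuple layout below are assumptions for illustration:

```python
def select_event_images(scored_images, threshold):
    """Keep captures whose sharpness exceeds the preset threshold.
    `scored_images` is a list of (name, sharpness) pairs;
    the threshold value is configuration-dependent (assumed here)."""
    return [name for name, sharpness in scored_images if sharpness > threshold]

picked = select_event_images(
    [("target.jpg", 0.8), ("hist1.jpg", 0.3), ("hist2.jpg", 0.9)],
    threshold=0.5)
```

In practice the threshold would be chosen so that surviving images meet the forensic pixel-resolution requirements mentioned earlier.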
Optionally, in this embodiment, the computer terminal may execute program codes of the following steps in the image processing method of the application program: receiving a target monitoring video acquired by target side equipment; identifying an abnormal event from a target monitoring video to obtain an event abnormal result, and snapshotting the target monitoring video to obtain a target snapshotted image; and sending the event abnormal result and the target snapshot image to cloud side equipment, wherein the cloud side equipment determines an event image of a target object based on the target snapshot image and a historical snapshot image, the target object is an object with an abnormal event, the historical snapshot image is obtained based on target characteristics of the target object, and the historical snapshot image is an image obtained by snapshot of a historical monitoring video acquired by the historical side equipment of the target object on a historical path.
Optionally, in this embodiment, the computer terminal may execute program codes of the following steps in the image processing method of the application program: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot, and the abnormal event comprises an abnormal traffic event; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image, wherein the target object comprises a target vehicle; acquiring a historical snapshot image of a target object based on the target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program: the target end-side device sends an acquired target monitoring video to the edge-side device; the edge-side device identifies an abnormal event from the target monitoring video to obtain an event abnormal result, captures a target snapshot image from the target monitoring video, and sends the event abnormal result and the target snapshot image to the cloud-side device; the cloud-side device determines a target object with an abnormal event and acquires target features of the target object based on the target snapshot image, wherein the abnormal event comprises an abnormal traffic event; the cloud-side device acquires a historical snapshot image of the target object based on the target features and determines an event image of the target object based on the target snapshot image and the historical snapshot image, wherein the historical snapshot image is captured from historical monitoring video acquired by historical end-side devices on the target object's historical path. The edge-side device may be integrated on the target end-side device. The target end-side device may include an augmented reality (AR) device and/or a virtual reality (VR) device, wherein the AR device and/or the VR device present the target monitoring video based on a predetermined driver.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program: the target end-side device, the edge-side device, or the cloud-side device acquires the type of the abnormal event and renders the target monitoring video based on that type to obtain a corresponding rendered video.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program. Acquiring the type of the abnormal event by the target end-side device, the edge-side device, and the cloud-side device, and rendering the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, comprises: when the type of the abnormal event is a first event type, the target end-side device performs first rendering on the target monitoring video to obtain a first rendered video; when the type of the abnormal event is a second event type, the edge-side device performs second rendering on the target monitoring video to obtain a second rendered video; when the type of the abnormal event is a third event type, the cloud-side device performs third rendering on the target monitoring video to obtain a third rendered video; wherein the rendering data volume of the first rendering is smaller than that of the second rendering, and the rendering data volume of the second rendering is smaller than that of the third rendering.
Optionally, in this embodiment, the computer terminal may execute program code for the following steps in the image processing method of the application program. Acquiring the type of the abnormal event by the target end-side device, the edge-side device, and the cloud-side device, and rendering the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, comprises: the target end-side device identifies the target monitoring video and, when a first-type event exists in it, performs fourth rendering on the target monitoring video to obtain a fourth rendered video, which it sends to the edge-side device; the edge-side device identifies the target monitoring video and, when a second-type event exists in it, performs fifth rendering on the fourth rendered video to obtain a fifth rendered video, which it sends to the cloud-side device; the cloud-side device identifies the target monitoring video and, when a third-type event exists in it, performs sixth rendering on the fifth rendered video to obtain a sixth rendered video; wherein the rendering data volume of the fourth rendering is smaller than that of the fifth rendering, and the rendering data volume of the fifth rendering is smaller than that of the sixth rendering.
Optionally, fig. 12 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 12, the computer terminal may include one or more processors 1202 (only one is shown), a memory 1204, and the like.
The memory 1204 can be used for storing software programs and modules, such as program instructions/modules corresponding to the image processing method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, that is, implementing the image processing method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the computer terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 1202 may invoke the memory-stored information and the application program via the transmission means to perform the following steps: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, and the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image; acquiring a historical snapshot image of a target object based on target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
Optionally, the processor 1202 may further execute the program code of the following steps: the method for acquiring the target characteristics of the target object based on the target snapshot image comprises the following steps: extracting object characteristics of a target object from a target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
Optionally, the processor 1202 may further execute program code for: based on the target characteristics, acquiring a historical snapshot of the target object, comprising: determining a plurality of candidate snap-shot images of the target object on the historical path based on the target characteristics; and screening out a historical snapshot image of the target object from the plurality of candidate snapshot images.
Optionally, the processor 1202 may further execute program code for: screening out a historical snap-shot image of a target object from a plurality of candidate snap-shot images, and the method comprises the following steps: respectively determining confidence degrees of a plurality of candidate snap-shot images based on the time continuity constraint condition and the space continuity constraint condition; and screening out historical snap-shot images of the target object from the candidate snap-shot images based on the confidence degrees of the candidate snap-shot images.
Optionally, the processor 1202 may further execute program code for: determining an event image of the target object based on the target snap-shot image and the history snap-shot image, comprising: respectively determining the definition of the target snapshot image and the definition of the historical snapshot image; and determining the target snapshot image with the definition larger than the preset definition and the historical snapshot image as the event image of the target object.
Optionally, the processor 1202 may further execute program code for: receiving a target monitoring video acquired by target side equipment; identifying an abnormal event from a target monitoring video to obtain an event abnormal result, and snapshotting the target monitoring video to obtain a target snapshotted image; and sending the event abnormal result and the target snapshot image to cloud side equipment, wherein the cloud side equipment determines an event image of a target object based on the target snapshot image and a historical snapshot image, the target object is an object with an abnormal event, the historical snapshot image is obtained based on target characteristics of the target object, and the historical snapshot image is an image obtained by snapshot of a historical monitoring video acquired by the historical side equipment of the target object on a historical path.
Optionally, the processor 1202 may further execute the program code of the following steps: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot, and the abnormal event comprises an abnormal traffic event; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image, wherein the target object comprises a target vehicle; acquiring a historical snapshot image of a target object based on the target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; and determining an event image of the target object based on the target snapshot image and the historical snapshot image.
Optionally, the processor 1202 may further execute the program code of the following steps: the method comprises the steps that a target monitoring video collected by a target side device is sent to a side device; the method comprises the steps that an edge side device identifies an abnormal event from a target monitoring video to obtain an event abnormal result, a target snapshot image is obtained from the target monitoring video through snapshot, and the event abnormal result and the target snapshot image are sent to a cloud side device; the cloud side equipment determines a target object with an abnormal event, and acquires target characteristics of the target object based on a target snapshot image; wherein the abnormal event comprises an abnormal traffic event; the cloud side device acquires a historical snapshot image of the target object based on the target characteristics, and determines an event image of the target object based on the target snapshot image and the historical snapshot image, wherein the historical snapshot image is acquired based on a historical monitoring video snapshot acquired by the historical side device of the target object on a historical path. The side device is integrated on the target end side device. The target-side device includes an Augmented Reality (AR) device, and/or a Virtual Reality (VR) device, wherein the AR device and/or the VR device present the target monitoring video based on a predetermined driver.
Optionally, the processor 1202 may further execute program code for: the method comprises the steps that the target side device, the side device and the cloud side device acquire the type of an abnormal event, and render a target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video.
Optionally, the processor 1202 may further execute program code for: the method comprises the following steps that the target side equipment, the side equipment and the cloud side equipment acquire the type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, and comprises the following steps: under the condition that the type of the abnormal event is a second event type, performing second rendering on the target monitoring video by the side equipment to obtain a second rendered video; under the condition that the type of the abnormal event is a third event type, performing third rendering on the target monitoring video by the cloud side equipment to obtain a third rendered video; and the rendering data volume of the first rendering is smaller than that of the second rendering, and the rendering data volume of the second rendering is smaller than that of the third rendering.
Optionally, the processor 1202 may further execute program code for: the method comprises the following steps that the target side equipment, the side equipment and the cloud side equipment acquire the type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, and comprises the following steps: the target side equipment identifies the target monitoring video, performs fourth rendering on the target monitoring video under the condition that the first type of event exists in the target monitoring video, obtains a fourth rendering video, and sends the fourth rendering video to the side equipment; the side equipment identifies the target monitoring video, performs fifth rendering on the fourth rendering video under the condition that a second type event exists in the target monitoring video, obtains a fifth rendering video, and sends the fifth rendering video to the cloud side equipment; the cloud side equipment identifies the target monitoring video, and performs sixth rendering on the fifth rendering video under the condition that the third type event exists in the target monitoring video, so as to obtain a sixth rendering video; and the rendering data volume of the fourth rendering is smaller than that of the fifth rendering, and the rendering data volume of the fifth rendering is smaller than that of the sixth rendering.
It should be understood by those skilled in the art that the structure shown in Fig. 12 is only illustrative and does not limit the structure of the electronic device; the computer terminal may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, or the like. For example, the computer terminal may include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in Fig. 12, or have a different configuration from that shown in Fig. 12.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the computer-readable storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present invention also provide a computer-readable storage medium. Optionally, in this embodiment, the computer-readable storage medium may be configured to store program code for executing the image processing method provided in Embodiment 1.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, and the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image; acquiring a historical snapshot image of a target object based on target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: the target characteristic of target object is obtained based on the target snapshot image, and the method comprises the following steps: extracting object characteristics of a target object from a target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: acquiring a historical snapshot image of the target object based on the target features, including: determining a plurality of candidate snapshot images of the target object on the historical path based on the target features; and screening out the historical snapshot image of the target object from the plurality of candidate snapshot images.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: screening out a historical snap-shot image of a target object from a plurality of candidate snap-shot images, and the method comprises the following steps: respectively determining confidence degrees of a plurality of candidate snap-shot images based on the time continuity constraint condition and the space continuity constraint condition; and screening out historical snap-shot images of the target object from the candidate snap-shot images based on the confidence degrees of the candidate snap-shot images.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: determining the event image of the target object based on the target snapshot image and the historical snapshot image, including: respectively determining the sharpness of the target snapshot image and the sharpness of the historical snapshot image; and determining, among the target snapshot image and the historical snapshot image, the images whose sharpness is greater than a preset sharpness as the event image of the target object.
Optionally, in this embodiment, the computer-readable storage medium is configured to store program code for performing the following steps: receiving the target monitoring video acquired by the target end-side device; identifying an abnormal event from the target monitoring video to obtain an event abnormal result, and capturing a target snapshot image from the target monitoring video; and sending the event abnormal result and the target snapshot image to the cloud-side device, wherein the cloud-side device determines an event image of a target object based on the target snapshot image and a historical snapshot image, the target object is an object in which the abnormal event occurs, the historical snapshot image is obtained based on target features of the target object, and the historical snapshot image is an image captured from a historical monitoring video of the target object on a historical path acquired by the historical end-side device.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot, and the abnormal event comprises an abnormal traffic event; determining a target object with an abnormal event, and acquiring target characteristics of the target object based on a target snapshot image, wherein the target object comprises a target vehicle; acquiring a historical snapshot image of a target object based on target characteristics, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path; an event image of the target object is determined based on the target snap-shot image and the history snap-shot image.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: target monitoring videos collected by the target side equipment are sent to the side equipment; the method comprises the steps that an edge side device identifies an abnormal event from a target monitoring video to obtain an event abnormal result, a target snapshot image is obtained from the target monitoring video through snapshot, and the event abnormal result and the target snapshot image are sent to a cloud side device; wherein the abnormal event comprises an abnormal traffic event; the cloud side equipment determines a target object with an abnormal event, and acquires target characteristics of the target object based on a target snapshot image; the cloud side equipment acquires a historical snap-shot image of the target object based on the target characteristics, and determines an event image of the target object based on the target snap-shot image and the historical snap-shot image, wherein the historical snap-shot image is obtained based on a historical monitoring video snap-shot of the target object on a historical path and collected by the historical side equipment. The side device is integrated on the target end side device. The target-side device includes an augmented reality AR device, and/or a virtual reality VR device, wherein the AR device and/or the VR device present the target monitoring video based on a predetermined driver.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: the method comprises the steps that the target side device, the side device and the cloud side device acquire the type of an abnormal event, and render a target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: the method comprises the following steps that the target side equipment, the side equipment and the cloud side equipment acquire the type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, and comprises the following steps: under the condition that the type of the abnormal event is a second event type, performing second rendering on the target monitoring video by the side equipment to obtain a second rendered video; under the condition that the type of the abnormal event is a third event type, performing third rendering on the target monitoring video by the cloud side equipment to obtain a third rendered video; and the rendering data volume of the first rendering is smaller than that of the second rendering, and the rendering data volume of the second rendering is smaller than that of the third rendering.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: the method comprises the following steps that the target side equipment, the side equipment and the cloud side equipment acquire the type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, and comprises the following steps: the target side equipment identifies the target monitoring video, performs fourth rendering on the target monitoring video under the condition that the first type of event exists in the target monitoring video, obtains a fourth rendering video, and sends the fourth rendering video to the side equipment; the side equipment identifies the target monitoring video, performs fifth rendering on the fourth rendering video under the condition that the second type event exists in the target monitoring video, obtains a fifth rendering video, and sends the fifth rendering video to the cloud side equipment; the cloud side equipment identifies the target monitoring video, and performs sixth rendering on the fifth rendering video under the condition that the third type event exists in the target monitoring video, so as to obtain a sixth rendering video; and the rendering data volume of the fourth rendering is smaller than that of the fifth rendering, and the rendering data volume of the fifth rendering is smaller than that of the sixth rendering.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a computer-readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned computer-readable storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (15)

1. An image processing method, characterized by comprising:
receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, and the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot;
determining a target object with the abnormal event, and acquiring target characteristics of the target object based on the target snapshot image;
acquiring a historical snapshot image of the target object based on the target feature, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path;
determining an event image of the target object based on the target snapshot image and the historical snapshot image;
wherein the acquiring of the target feature of the target object based on the target snapshot image includes: extracting object features of the target object from the target snapshot image; determining scene characteristics of the target object based on the capturing time of the target capturing image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
2. The method of claim 1, wherein the acquiring a historical snapshot image of the target object based on the target feature comprises:
determining a plurality of candidate snapshot images of the target object on the historical path based on the target feature;
and screening out the historical snapshot image of the target object from the plurality of candidate snapshot images.
3. The method of claim 2, wherein the screening out the historical snapshot image of the target object from the plurality of candidate snapshot images comprises:
respectively determining confidence degrees of the plurality of candidate snapshot images based on a temporal continuity constraint condition and a spatial continuity constraint condition;
and screening out the historical snapshot image of the target object from the candidate snapshot images based on the confidence degrees of the candidate snapshot images.
4. The method of claim 1, wherein the determining an event image of the target object based on the target snapshot image and the historical snapshot image comprises:
respectively determining the sharpness of the target snapshot image and the sharpness of the historical snapshot image;
and determining, among the target snapshot image and the historical snapshot image, the images whose sharpness is greater than a preset sharpness as the event image of the target object.
5. An image processing method, characterized by comprising:
receiving a target monitoring video acquired by target side equipment;
identifying an abnormal event from the target monitoring video to obtain an event abnormal result, and capturing a target snapshot image from the target monitoring video;
sending the event abnormal result and the target snapshot image to cloud side equipment, wherein the event abnormal result and the target snapshot image are used for determining an event image of a target object by the cloud side equipment based on the target snapshot image and a historical snapshot image, the target object is an object with the abnormal event, the historical snapshot image is obtained based on target characteristics of the target object, and the historical snapshot image is an image obtained by snapshot of a historical monitoring video acquired by the historical end side equipment of the target object on a historical path;
wherein the target feature of the target object is acquired based on the target snapshot image, including: extracting object features of the target object from the target snapshot image; determining scene characteristics of the target object based on the capturing time of the target capturing image, the position information of the target object and a preset target path of the target object; based on the object features and the scene features, structured target features for the target object are generated.
6. An image processing method, comprising:
receiving an event abnormal result and a target snapshot image, wherein the event abnormal result indicates that an abnormal event is identified based on a target monitoring video acquired by target side equipment, the target snapshot image is obtained based on the target monitoring video acquired by the target side equipment through snapshot, and the abnormal event comprises an abnormal traffic event;
determining a target object in which the abnormal event occurs, and acquiring a target feature of the target object based on the target snapshot image, wherein the acquiring the target feature of the target object based on the target snapshot image comprises: extracting object features of the target object from the target snapshot image; determining scene characteristics of the target object based on the capturing time of the target capturing image, the position information of the target object and a preset target path of the target object; generating a structured target feature for the target object based on the object feature and the scene feature;
acquiring a historical snapshot image of the target object based on the target feature, wherein the historical snapshot image is obtained based on a historical monitoring video snapshot acquired by a historical end-side device of the target object on a historical path;
determining an event image of the target object based on the target snapshot image and the historical snapshot image.
7. An image processing method, comprising:
a target end-side device collects a target monitoring video and sends the target monitoring video to an edge-side device;
the edge-side device identifies an abnormal event from the target monitoring video to obtain an event abnormal result, captures a target snapshot image from the target monitoring video, and sends the event abnormal result and the target snapshot image to the cloud-side device; wherein the abnormal event comprises an abnormal traffic event;
the cloud-side equipment determines a target object with the abnormal event, and acquires a target feature of the target object based on the target snapshot image, wherein the acquiring the target feature of the target object based on the target snapshot image includes: extracting object features of the target object from the target snapshot image; determining scene characteristics of the target object based on the snapshot time of the target snapshot image, the position information of the target object and a preset target path of the target object; generating a structured target feature for the target object based on the object feature and the scene feature;
the cloud-side device acquires a historical snapshot image of the target object based on the target feature, and determines an event image of the target object based on the target snapshot image and the historical snapshot image, wherein the historical snapshot image is captured from a historical monitoring video of the target object on a historical path acquired by a historical end-side device.
8. The method of claim 7, wherein the side device is integrated on the target end-side device.
9. The method of claim 7, wherein the target-side device comprises an Augmented Reality (AR) device, and/or a Virtual Reality (VR) device, wherein the AR device and/or the VR device presents the target monitoring video based on a predetermined driver.
10. The method of claim 9, further comprising: the target side device, the side device and the cloud side device all acquire the type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video.
11. The method according to claim 10, wherein the target end-side device, the edge-side device, and the cloud-side device obtain a type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, including:
under the condition that the type of the abnormal event is a first event type, performing first rendering on the target monitoring video by the target side equipment to obtain a first rendering video;
under the condition that the type of the abnormal event is a second event type, performing second rendering on the target monitoring video by the side equipment to obtain a second rendering video;
under the condition that the type of the abnormal event is a third event type, performing third rendering on the target monitoring video by the cloud side equipment to obtain a third rendered video;
wherein the amount of rendering data of the first rendering is smaller than the amount of rendering data of the second rendering, which is smaller than the amount of rendering data of the third rendering;
wherein the first event type, the second event type, and the third event type are divided according to one of: the abnormal degree of the abnormal event, the existence of the abnormal event and the influence degree of the abnormal event.
12. The method according to claim 10, wherein the target end-side device, the edge-side device, and the cloud-side device obtain a type of the abnormal event, and render the target monitoring video based on the type of the abnormal event to obtain a corresponding rendered video, including:
the target side device identifies the target monitoring video, performs fourth rendering on the target monitoring video under the condition that a first type event exists in the target monitoring video, obtains a fourth rendering video, and sends the fourth rendering video to the side device;
the side equipment identifies the target monitoring video, performs fifth rendering on the fourth rendering video under the condition that a second type event exists in the target monitoring video, obtains a fifth rendering video, and sends the fifth rendering video to the cloud side equipment;
the cloud side equipment identifies the target monitoring video, and performs sixth rendering on the fifth rendering video under the condition that a third type event exists in the target monitoring video, so as to obtain a sixth rendering video;
wherein the amount of rendering data of the fourth rendering is less than the amount of rendering data of the fifth rendering, which is less than the amount of rendering data of the sixth rendering;
wherein the first type event, the second type event and the third type event are obtained by dividing according to one of the following: the degree of abnormality of the abnormal event, the presence or absence of the abnormal event, and the degree of influence of the abnormal event.
13. An image processing apparatus, characterized by comprising:
a receiving module configured to receive an event anomaly result and a target snapshot image, wherein the event anomaly result indicates that an abnormal event has been identified based on a target monitoring video collected by an end-side device, and the target snapshot image is captured from the target monitoring video collected by the end-side device;
a first determining module configured to determine a target object in which the abnormal event occurs, and to acquire a target feature of the target object based on the target snapshot image, wherein acquiring the target feature of the target object based on the target snapshot image comprises: extracting an object feature of the target object from the target snapshot image; determining a scene feature of the target object based on the capture time of the target snapshot image, the position information of the target object, and a preset target path of the target object; and generating a structured target feature for the target object based on the object feature and the scene feature;
an acquisition module configured to acquire a historical snapshot image of the target object based on the target feature, wherein the historical snapshot image is captured from a historical monitoring video collected by a historical end-side device on a historical path of the target object; and
a second determining module configured to determine an event image of the target object based on the target snapshot image and the historical snapshot image.
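The four modules of claim 13 form a pipeline: receive, determine the target feature, retrieve history, and assemble event images. A minimal sketch follows; every class, method and field name here is a hypothetical illustration of the claimed structure, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class TargetFeature:
    """Structured target feature combining appearance and scene context."""
    object_feature: dict   # appearance features extracted from the snapshot
    scene_feature: dict    # capture time, position and preset path context

class ImageProcessingApparatus:
    def receive(self, event_result, target_snapshot):
        # Receiving module: event anomaly result + snapshot from the end-side device.
        self.event_result, self.snapshot = event_result, target_snapshot

    def determine_target(self):
        # First determining module: build a structured target feature from
        # object features plus scene features (time, position, preset path).
        obj = {"appearance": self.snapshot.get("crop")}
        scene = {"time": self.snapshot.get("time"),
                 "position": self.snapshot.get("position"),
                 "path": self.snapshot.get("preset_path")}
        self.feature = TargetFeature(obj, scene)
        return self.feature

    def acquire_history(self, archive):
        # Acquisition module: look up historical snapshots whose stored
        # feature matches the structured target feature.
        self.history = [img for img in archive
                        if img.get("feature") == self.feature.object_feature]
        return self.history

    def determine_event_images(self):
        # Second determining module: combine the target snapshot with the
        # matched historical snapshots into the event image set.
        return [self.snapshot] + self.history
```

In practice the feature match would be a similarity search over embedding vectors rather than dictionary equality; the sketch only mirrors the module boundaries the claim recites.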
14. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the image processing method according to any one of claims 1 to 12.
15. A computer device, comprising a memory and a processor, wherein:
the memory stores a computer program; and
the processor is configured to execute the computer program stored in the memory, the computer program, when executed, causing the processor to perform the image processing method of any one of claims 1 to 12.
CN202210627013.6A 2022-06-06 2022-06-06 Image processing method, image processing apparatus, and computer-readable storage medium Active CN114708542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210627013.6A CN114708542B (en) 2022-06-06 2022-06-06 Image processing method, image processing apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210627013.6A CN114708542B (en) 2022-06-06 2022-06-06 Image processing method, image processing apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN114708542A CN114708542A (en) 2022-07-05
CN114708542B true CN114708542B (en) 2022-09-02

Family

ID=82177981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210627013.6A Active CN114708542B (en) 2022-06-06 2022-06-06 Image processing method, image processing apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN114708542B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208183A (en) * 2013-04-03 2013-07-17 昆明联诚科技有限公司 Vehicle video data mining method for traffic violation evidence obtaining
CN112200077A (en) * 2020-04-15 2021-01-08 陈建 Artificial intelligent image processing method and system based on intelligent traffic
WO2021077766A1 (en) * 2019-10-24 2021-04-29 南京慧尔视智能科技有限公司 Large-area multi-target traffic event detection system and method
CN114120072A (en) * 2021-09-29 2022-03-01 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, computer-readable storage medium, and computer terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030035653A1 (en) * 2001-08-20 2003-02-20 Lyon Richard F. Storage and processing service network for unrendered image data
US20160104198A1 (en) * 2014-10-14 2016-04-14 Smith Luby Holdings, LLC Automobile incident data networking platform
CN112291520B (en) * 2020-10-26 2022-12-06 浙江大华技术股份有限公司 Abnormal event identification method and device, storage medium and electronic device
CN112258842A (en) * 2020-10-26 2021-01-22 北京百度网讯科技有限公司 Traffic monitoring method, device, equipment and storage medium
CN114581827A (en) * 2022-03-01 2022-06-03 西安西古光通信有限公司 Abnormal behavior early warning system, method, equipment and medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208183A (en) * 2013-04-03 2013-07-17 昆明联诚科技有限公司 Vehicle video data mining method for traffic violation evidence obtaining
WO2021077766A1 (en) * 2019-10-24 2021-04-29 南京慧尔视智能科技有限公司 Large-area multi-target traffic event detection system and method
CN112200077A (en) * 2020-04-15 2021-01-08 陈建 Artificial intelligent image processing method and system based on intelligent traffic
CN114120072A (en) * 2021-09-29 2022-03-01 阿里巴巴达摩院(杭州)科技有限公司 Image processing method, computer-readable storage medium, and computer terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A comprehensive survey on digital video forensics: Taxonomy, challenges, and future directions; Abdul Rehman Javed et al.; Engineering Applications of Artificial Intelligence; 2021-09-20; full text *
A video forensics method based on event detection; Wang Wei et al.; Application Research of Computers; May 2009; Vol. 26, No. 5; full text *

Also Published As

Publication number Publication date
CN114708542A (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN104200671B (en) A virtual checkpoint management method and system based on a big data platform
CN105849790B (en) Road condition information acquisition method
KR101496390B1 (en) System for Vehicle Number Detection
CN103359020A (en) Motorcycle driving training or examination monitoring method and system
US10277888B2 (en) Depth triggered event feature
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN111340710A (en) Method and system for acquiring vehicle information based on image stitching
CN105303826A (en) Violating side parking evidence obtaining device and method
CN105070065A (en) Mobile intelligent traffic gate system and method adopting same to monitor vehicles
CN105303827A (en) Traffic violation image obtaining device and method
CN112738394B (en) Linkage method and device of radar and camera equipment and storage medium
CN115004273A (en) Digital reconstruction method, device and system for traffic road
CN214338041U (en) Intelligent city monitoring system based on 5G Internet of things
CN112601049B (en) Video monitoring method and device, computer equipment and storage medium
CN113869258A (en) Traffic incident detection method and device, electronic equipment and readable storage medium
CN114708542B (en) Image processing method, image processing apparatus, and computer-readable storage medium
CN111429723B (en) Communication and perception data fusion method based on road side equipment
CN110855947B (en) Image snapshot processing method and device
TWI542194B (en) Three-dimensional image processing system, apparatus and method for the same
CN103595958A (en) Video tracking analysis method and system
CN115981219A (en) Intelligent monitoring system for high-speed tunnel
CN105303825A (en) Violating inclined side parking evidence obtaining device and method
CN114120642B (en) Road traffic flow three-dimensional reconstruction method, computer equipment and storage medium
CN115063969A (en) Data processing method, device, medium, roadside cooperative device and system
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240206

Address after: Room 553, 5th Floor, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province, 311121

Patentee after: Hangzhou Alibaba Cloud Feitian Information Technology Co.,Ltd.

Country or region after: China

Address before: 310023 Room 516, floor 5, building 3, No. 969, Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee before: Alibaba Dharma Institute (Hangzhou) Technology Co.,Ltd.

Country or region before: China