CN112530021B - Method, apparatus, device and storage medium for processing data


Info

Publication number
CN112530021B
Authority
CN
China
Prior art keywords: early warning, determining, warning event, preset early, point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011550908.1A
Other languages: Chinese (zh)
Other versions: CN112530021A (en)
Inventor
刘豪杰 (Liu Haojie)
陈睿智 (Chen Ruizhi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2020-12-24
Publication date: 2023-06-23
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011550908.1A
Publication of CN112530021A
Application granted
Publication of CN112530021B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/04: Indexing scheme involving 3D image data

Abstract

The application discloses a method, apparatus, device, and storage medium for processing data, belonging to the field of data processing and in particular to the technical fields of cloud computing, deep learning, and augmented reality. The specific implementation scheme is as follows: acquiring video data and point cloud data of a target area; determining an object whose state has changed according to at least one video frame in the video data; determining target point cloud data of the object; determining whether a preset early warning event occurs according to the target point cloud data; and, in response to determining that the preset early warning event occurs, generating a special effect animation in the video data for highlighting the preset early warning event. This implementation helps people notice and analyze events in a real scene that are captured by video or point cloud data but would otherwise go unperceived, improving their understanding and perception of the scene.

Description

Method, apparatus, device and storage medium for processing data
Technical Field
The present application relates to the field of data processing, and in particular, to the technical field of cloud computing, deep learning, and augmented reality, and more particularly, to a method, an apparatus, a device, and a storage medium for processing data.
Background
Scene information may be represented using 2D information or 3D information. 2D information is mainly acquired by image acquisition devices. 3D scene information is typically represented as a point cloud, and point cloud information for indoor and outdoor scenes can generally be obtained efficiently through SfM (structure from motion) techniques or point cloud acquisition devices. In some real-life scenes, even when an emergency occurs, it is difficult for people to notice and capture the corresponding event in the video captured of the scene.
Disclosure of Invention
Provided are a method, apparatus, device, and storage medium for processing data.
According to a first aspect, there is provided a method for processing data, comprising: acquiring video data and point cloud data of a target area; determining an object with a changed state according to at least one video frame in the video data; determining target point cloud data of the object; determining whether a preset early warning event occurs according to the target point cloud data; and generating special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
According to a second aspect, there is provided an apparatus for processing data, comprising: a data acquisition unit configured to acquire video data and point cloud data of a target area; an object determining unit configured to determine, from at least one video frame in the video data, an object whose state has changed; a point cloud determining unit configured to determine target point cloud data of the object; a judging unit configured to determine whether a preset early warning event occurs according to the target point cloud data; and an event early warning unit configured to generate a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
According to a third aspect, there is provided an electronic device for processing data, comprising: at least one computing unit; and a storage unit communicatively coupled to the at least one computing unit; wherein the storage unit stores instructions executable by the at least one computing unit to enable the at least one computing unit to perform the method as described in the first aspect.
According to a fourth aspect, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method as described in the first aspect.
According to a fifth aspect, there is provided a computer program product comprising a computer program which, when executed by a computing unit, implements the method as described in the first aspect.
According to this technology, a more efficient and striking way of reminding people of events is provided, helping them notice and analyze events in a real scene that are captured by video or point cloud data but would otherwise go unperceived, and improving their understanding and perception of the scene.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for processing data according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for processing data according to the present application;
FIG. 4 is a flow chart of another embodiment of a method for processing data according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for processing data according to the present application;
fig. 6 is a block diagram of an electronic device for implementing a method for processing data of an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods for processing data or the apparatus for processing data of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include a point cloud acquisition device 101, a video acquisition device 102, a terminal device 103, a network 104, and a server 105. The network 104 is used as a medium to provide communications links between electronic devices. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may use the point cloud acquisition device 101 to acquire point cloud data in a scene and the video acquisition device 102 to acquire video data in the scene. The point cloud acquisition device 101 and the video acquisition device 102 may upload the acquired data to the terminal device 103 or the server 105 through the network 104. The point cloud acquisition device 101 may be various electronic devices capable of acquiring point cloud data, such as a laser radar sensor, a millimeter wave radar sensor, a depth camera, and the like. The video capture device 102 may be a variety of electronic devices capable of capturing video data, such as a surveillance camera, video camera, and the like.
The terminal device 103 may have various communication client applications installed therein, such as a video play class application, a point cloud visualization class application, a data processing class application, and the like. The terminal device 103 may play the video data collected by the video collection device 102 through a video play application, may display the point cloud data collected by the point cloud collection device 101 through a point cloud visualization application, and may process the video data and the point cloud data through a data processing application to determine whether a preset event occurs.
The terminal device 103 may be hardware or software. When the terminal device 103 is hardware, it may be any of various electronic devices, including but not limited to smartphones, tablets, car-mounted computers, laptop computers, desktop computers, and the like. When the terminal device 103 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server that provides various services, such as a background server that processes point cloud data acquired by the point cloud acquisition device 101 and video data acquired by the video acquisition device 102. The background server may perform various analysis processes on the point cloud data and the video data to obtain a processing result (for example, whether an emergency occurs), and feed back the processing result to the terminal device 103.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that, the method for processing data provided in the embodiment of the present application may be performed by the terminal device 103 or may be performed by the server 105. Accordingly, the means for processing data may be provided in the terminal device 103 or in the server 105.
It should be understood that the numbers of point cloud acquisition devices, video acquisition devices, terminal devices, networks, and servers in fig. 1 are merely illustrative. Any number of point cloud acquisition devices, video acquisition devices, terminal devices, networks, and servers may be provided as required.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for processing data according to the present application is shown. The method for processing data of the present embodiment includes the steps of:
Step 201, acquiring video data and point cloud data of a target area.
In the present embodiment, the execution body of the method for processing data (e.g., the terminal device 103 or the server 105 shown in fig. 1) can acquire video data and point cloud data of a target area in various ways. For example, the execution body may acquire video data from a video acquisition device (e.g., the video acquisition device 102 shown in fig. 1) and point cloud data from a point cloud acquisition device (e.g., the point cloud acquisition device 101 shown in fig. 1) over the network. Here, the target area is any area to be monitored, for example a construction area or a road area where accidents are likely to occur.
Step 202, determining an object with a changed state according to at least one video frame in the video data.
After the execution body acquires the video data, it can analyze at least one video frame in the video data to determine the object whose state has changed. For example, the execution body may compare two adjacent video frames (e.g., the n-th and (n+1)-th frames) or two frames separated by a preset number of frames (e.g., the n-th and (n+m)-th frames), determine the difference between them, and take the object indicated by the difference as the object whose state has changed. The change in state here may be a change in position, volume, color, or the like. The object may be any object to be monitored, and may include, for example, falling rocks, pedestrians, cultural relics, and the like.
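By way of illustration only, a minimal sketch of such frame differencing in Python with OpenCV follows; the function name, grayscale thresholding, and the threshold and minimum-area values are illustrative assumptions, not something prescribed by this application.

```python
import cv2

def detect_changed_objects(frame_n, frame_n_plus_m, diff_thresh=25, min_area=500):
    """Compare two video frames and return bounding boxes of regions that changed."""
    gray_a = cv2.cvtColor(frame_n, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_n_plus_m, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_a, gray_b)  # pixel-wise difference between the two frames
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each sufficiently large changed region is a candidate object whose state changed.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```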
Step 203, determining target point cloud data of the object.
After the object whose state has changed is determined from the video data, the target point cloud data of the object may be determined. Specifically, the execution body may determine the target point cloud data of the object using the conversion relationship between the point cloud data and the video data. Alternatively, the execution body may run recognition directly on the point cloud data to determine the target point cloud data corresponding to the object.
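For illustration, a minimal sketch of the first approach follows: using the conversion relationship between the point cloud and the video to keep only the points that project into the object's 2D bounding box. The camera intrinsics K, the 4x4 world-to-camera transform T, and the bounding-box convention are assumptions made for the sketch; the application does not prescribe them.

```python
import numpy as np

def select_target_point_cloud(points_xyz, bbox, K, T):
    """Return the points of an (N, 3) cloud that project into bbox = (x, y, w, h)."""
    homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T @ homogeneous.T).T[:, :3]   # world coordinates -> camera coordinates
    in_front = cam[:, 2] > 0             # discard points behind the camera
    uvw = (K @ cam[in_front].T).T        # camera coordinates -> image plane
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    x, y, w, h = bbox
    inside = (u >= x) & (u < x + w) & (v >= y) & (v < y + h)
    return points_xyz[in_front][inside]  # target point cloud data of the object
```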
Step 204, determining whether a preset early warning event occurs according to the target point cloud data.
After determining the target point cloud data of each object, the execution body can judge whether a preset early warning event has occurred. The preset early warning event here may include, but is not limited to: the moving distance of the object exceeding a preset distance threshold, the color of the object changing to a preset color, and the object passing through a preset area.
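For illustration, such checks can be expressed as a small rule table; the rule names, state keys, and defaults below are hypothetical and merely mirror the three example events above.

```python
# Hypothetical rules mirroring the example preset early warning events above.
WARNING_RULES = {
    "moved_too_far": lambda s: s.get("move_distance_m", 0.0)
                               > s.get("distance_threshold_m", float("inf")),
    "color_changed": lambda s: s.get("color") is not None
                               and s.get("color") == s.get("warning_color"),
    "entered_area":  lambda s: s.get("in_preset_area", False),
}

def triggered_warning_events(object_state):
    """Return the names of all preset early warning events an object triggers."""
    return [name for name, rule in WARNING_RULES.items() if rule(object_state)]
```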
Step 205, in response to determining that the preset early warning event occurs, generating a special effect animation in the video data for highlighting the preset early warning event.
If the execution body determines that the preset early warning event occurs, a special effect animation can be generated in the video data for highlighting the preset early warning event. The special effect animation can be an animation with a certain display effect; by displaying it, a user can intuitively see that a preset early warning event has occurred. Specifically, the execution body may generate the special effect animation at the center of the video frame, rendered on the topmost display layer. Alternatively, the execution body may generate the special effect animation at the position of the object in the video data, making it easier for the user to observe where the object is.
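As a sketch of the second option, generating the animation at the object's position, one could overlay a simple pulsing highlight on each frame with OpenCV; the pulse shape, color, and label below are invented placeholders for a real special-effect renderer.

```python
import math
import cv2

def draw_warning_effect(frame, center_uv, t_seconds):
    """Overlay a pulsing circle and a warning label at pixel position center_uv.

    center_uv is an integer (u, v) pixel tuple; t_seconds is the elapsed time.
    """
    radius = int(20 + 10 * math.sin(2 * math.pi * t_seconds))  # pulses once per second
    cv2.circle(frame, center_uv, radius, (0, 0, 255), thickness=3)
    cv2.putText(frame, "WARNING", (center_uv[0] + 25, center_uv[1]),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return frame
```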
With continued reference to fig. 3, a schematic diagram of one application scenario of the method for processing data according to the present application is shown. In the application scenario of fig. 3, monitoring cameras arranged around a road prone to rockfall collect video data, and radar sensors collect point cloud data. By analyzing the video data, it is determined that the position of a boulder has changed. The target point cloud data of the boulder is then determined from the point cloud data. According to the target point cloud data, the boulder has moved 5 cm, so the possibility of it falling is high, and a special effect animation of the boulder falling is generated in the video data.
The method for processing data provided by this embodiment of the application helps people notice and analyze events in a real scene that are captured by video or point cloud data but would otherwise go unperceived, improving their understanding and perception of the scene.
With continued reference to FIG. 4, a flow 400 of another embodiment of a method for processing data according to the present application is shown. As shown in fig. 4, the method of the present embodiment may include the steps of:
Step 401, acquiring video data and point cloud data of a target area.
Step 402, determining an object whose state has changed according to at least one video frame in the video data.
Step 403, determining target point cloud data of the object.
Step 404, identifying the type, moving direction, and moving distance of the object according to the target point cloud data; and determining that a preset early warning event occurs in response to determining that the type, moving direction, and moving distance of the object meet preset conditions.
In this embodiment, after determining the target point cloud data of the object, the execution body may perform point cloud recognition on the target point cloud data to determine the type, moving direction, and moving distance of the object. It will be appreciated that determining the moving direction and moving distance of an object requires comparing at least two point cloud frames. Specifically, the execution body may first determine, from the target point cloud data, the point cloud frame before the position of the object changed, and take it as a reference. At least one subsequent point cloud frame is then compared against the reference to determine the moving direction and moving distance of the object.
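A minimal sketch of this reference comparison follows; estimating motion from the displacement of the point cloud centroid is a simplifying assumption made for illustration, as the application does not prescribe a specific estimator.

```python
import numpy as np

def estimate_movement(reference_points, current_points):
    """Estimate moving direction (unit vector) and distance between two point cloud
    frames of the same object; both inputs are (N, 3) arrays in world coordinates."""
    displacement = current_points.mean(axis=0) - reference_points.mean(axis=0)
    distance = float(np.linalg.norm(displacement))
    direction = displacement / distance if distance > 0 else np.zeros(3)
    return direction, distance
```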
The execution body can judge whether the type, moving direction, and moving distance of the object meet preset conditions, and if so, determine that a preset early warning event has occurred. The preset conditions may include, but are not limited to: a pedestrian moving 50 meters toward the construction area, or a boulder falling 50 cm.
Step 405, determining three-dimensional position information of an object according to the target point cloud data; projecting the three-dimensional position information to an image plane in video data, and determining two-dimensional position information of an object; and generating special effect animation in the video data according to the two-dimensional position information for highlighting and displaying a preset early warning event.
In this embodiment, the execution subject may first determine three-dimensional position information of the object according to the target point cloud data. The three-dimensional position information may be three-dimensional coordinates. Then, the execution subject may project the three-dimensional position information to an image plane in the video data in combination with a projection relationship or a coordinate conversion matrix between the point cloud acquisition device and the video acquisition device, to determine two-dimensional position information of the object. Here, the two-dimensional position information is the planar position of the object in the video data. The executing body can generate special effect animation at the plane position for highlighting the preset early warning event.
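The projection itself can be sketched as follows for a pinhole camera, again assuming a known intrinsic matrix K and a 4x4 world-to-camera conversion matrix T as stand-ins for the projection relationship described above.

```python
import numpy as np

def project_object_position(point_xyz, K, T):
    """Project a 3D world position to (u, v) pixel coordinates on the image plane."""
    p_cam = (T @ np.append(point_xyz, 1.0))[:3]  # world frame -> camera frame
    uvw = K @ p_cam                              # camera frame -> image plane
    return uvw[0] / uvw[2], uvw[1] / uvw[2]      # perspective divide
```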
In some alternative implementations of the present embodiment, the execution subject may implement step 405 described above specifically by the following steps not shown in fig. 4: in response to determining that the preset early warning event occurs, determining a target special effect type of the object according to the type of the object; and generating special effect animation in the video data according to the target special effect type for highlighting and displaying a preset early warning event.
In this implementation, if a preset early warning event occurs, the execution body may first determine the target special effect type of the object according to the type of the object. Specifically, the execution body may query a preset correspondence between object types and special effect types to determine the target special effect type of the object. Special effect types may differ in display color and presentation form. For example, special effect type 1 is a special effect for representing a falling boulder: it displays the boulder in red, represents the boulder's fall with a "fly-out" motion, and represents the sound of the boulder hitting the ground with a "bang" sound effect. The execution body can generate a special effect animation of the target special effect type in the video data to highlight the preset early warning event. It will be appreciated that the execution body may generate this special effect animation at the planar position of the object in the video data.
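Such a preset correspondence might be stored as a simple lookup table, sketched below; the boulder entry echoes the example above, while the pedestrian entry and the fallback are hypothetical additions.

```python
# Illustrative correspondence between object types and special effect types.
EFFECT_BY_OBJECT_TYPE = {
    "boulder":    {"color": "red",    "motion": "fly-out", "sound": "bang"},
    "pedestrian": {"color": "yellow", "motion": "flash",   "sound": "beep"},
}

def target_effect_type(object_type):
    """Look up the target special effect type for an object type, with a fallback."""
    return EFFECT_BY_OBJECT_TYPE.get(object_type,
                                     {"color": "red", "motion": "flash", "sound": None})
```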
In some alternative implementations of the present embodiment, the execution subject may implement step 405 described above specifically by the following steps not shown in fig. 4: and generating an augmented reality special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
In this implementation manner, the executing body may generate the augmented reality special effect animation in the video data for highlighting the preset early warning event when the preset early warning event occurs. The executing body may employ existing augmented reality techniques to generate the augmented reality special effects animation.
The method for processing data provided by the embodiment of the application can monitor the type, the moving direction and the moving distance of the object and display the data in various striking special effects when the early warning event occurs, thereby being convenient for a user to observe and supervise the scene.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an apparatus for processing data, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 5, the data processing apparatus 500 of the present embodiment includes: a data acquisition unit 501, an object determination unit 502, a point cloud determination unit 503, a judgment unit 504, and an event early warning unit 505.
The data acquisition unit 501 is configured to acquire video data and point cloud data of a target area.
The object determining unit 502 is configured to determine an object whose state changes according to at least one video frame in the video data.
The point cloud determining unit 503 is configured to determine target point cloud data of an object.
The judging unit 504 is configured to determine whether a preset early warning event occurs according to the target point cloud data.
The event early warning unit 505 is configured to generate a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
In some optional implementations of the present embodiment, the judging unit 504 may be further configured to: identify the type, moving direction, and moving distance of the object according to the target point cloud data; and determine that a preset early warning event occurs in response to determining that the type, moving direction, and moving distance of the object meet preset conditions.
In some optional implementations of the present embodiment, the event early warning unit 505 may be further configured to: in response to determining that the preset early warning event occurs, determining a target special effect type of the object according to the type of the object; and generating special effect animation in the video data according to the target special effect type for highlighting and displaying a preset early warning event.
In some optional implementations of the present embodiment, the event early warning unit 505 may be further configured to: determine three-dimensional position information of the object according to the target point cloud data; project the three-dimensional position information onto the image plane in the video data to determine two-dimensional position information of the object; and generate a special effect animation in the video data according to the two-dimensional position information for highlighting the preset early warning event.
In some optional implementations of the present embodiment, the event early warning unit 505 may be further configured to: and generating an augmented reality special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
It should be understood that the units 501 to 505 described in the apparatus 500 for processing data correspond to the respective steps in the method described with reference to fig. 2. Thus, the operations and features described above with respect to the method for processing data are equally applicable to the apparatus 500 and the units contained therein, and are not described in detail herein.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
Fig. 6 shows a block diagram of an electronic device 600 for performing the method for processing data according to an embodiment of the application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 may also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, and so on. The computing unit 601 performs the various methods and processes described above, such as the method for processing data. For example, in some embodiments, the method for processing data may be implemented as a computer software program tangibly embodied on a machine-readable storage medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for processing data described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method for processing data by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable computing unit, which may be a special-purpose or general-purpose programmable computing unit that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. The program code may be packaged into a computer program product. This program code or these computer program products may be provided to a computing unit or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the computing unit 601, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in cloud computing service systems that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and virtual private server (VPS) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present application are achieved, and the present application is not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method for processing data, comprising:
acquiring video data and point cloud data of a target area;
determining an object with a changed state according to at least one video frame in the video data;
determining target point cloud data of the object;
determining whether a preset early warning event occurs according to the target point cloud data;
generating special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs;
wherein, the determining whether the preset early warning event occurs according to the target point cloud data comprises: identifying the type, the moving direction and the moving distance of the object according to the target point cloud data; and determining that a preset early warning event occurs in response to determining that the type, the moving direction and the moving distance of the object meet preset conditions.
2. The method of claim 1, wherein generating a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs comprises:
in response to determining that the preset early warning event occurs, determining a target special effect type of the object according to the type of the object;
and generating a special effect animation in the video data according to the target special effect type for highlighting the preset early warning event.
3. The method of claim 1, wherein generating a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs comprises:
determining three-dimensional position information of the object according to the target point cloud data;
projecting the three-dimensional position information to an image plane in the video data, and determining two-dimensional position information of the object;
and generating special effect animation in the video data according to the two-dimensional position information, wherein the special effect animation is used for highlighting the preset early warning event.
4. The method of claim 1, wherein generating a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs comprises:
and generating an augmented reality special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
5. An apparatus for processing data, comprising:
a data acquisition unit configured to acquire video data and point cloud data of a target area;
an object determining unit configured to determine an object whose state changes from at least one video frame in the video data;
a point cloud determining unit configured to determine target point cloud data of the object;
the judging unit is configured to determine whether a preset early warning event occurs according to the target point cloud data;
an event early warning unit configured to generate a special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs;
wherein the judging unit is configured to: identifying the type, the moving direction and the moving distance of the object according to the target point cloud data; and determining that a preset early warning event occurs in response to determining that the type, the moving direction and the moving distance of the object meet preset conditions.
6. The apparatus of claim 5, wherein the event early warning unit is further configured to:
in response to determining that the preset early warning event occurs, determining a target special effect type of the object according to the type of the object;
and generating a special effect animation in the video data according to the target special effect type for highlighting the preset early warning event.
7. The apparatus of claim 5, wherein the event early warning unit is further configured to:
determining three-dimensional position information of the object according to the target point cloud data;
projecting the three-dimensional position information to an image plane in the video data, and determining two-dimensional position information of the object;
and generating special effect animation in the video data according to the two-dimensional position information, wherein the special effect animation is used for highlighting the preset early warning event.
8. The apparatus of claim 5, wherein the event early warning unit is further configured to:
and generating an augmented reality special effect animation in the video data for highlighting the preset early warning event in response to determining that the preset early warning event occurs.
9. An electronic device for processing data, comprising:
at least one computing unit; and
a storage unit in communication with the at least one computing unit; wherein
the storage unit stores instructions executable by the at least one computing unit to enable the at least one computing unit to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-4.
CN202011550908.1A 2020-12-24 2020-12-24 Method, apparatus, device and storage medium for processing data Active CN112530021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011550908.1A CN112530021B (en) 2020-12-24 2020-12-24 Method, apparatus, device and storage medium for processing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011550908.1A CN112530021B (en) 2020-12-24 2020-12-24 Method, apparatus, device and storage medium for processing data

Publications (2)

Publication Number / Publication Date
CN112530021A (en): 2021-03-19
CN112530021B (en): 2023-06-23

Family

ID=74976185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011550908.1A Active CN112530021B (en) 2020-12-24 2020-12-24 Method, apparatus, device and storage medium for processing data

Country Status (1)

Country Link
CN (1) CN112530021B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111035B (en) * 2021-04-09 2022-09-23 上海掌门科技有限公司 Special effect video generation method and equipment
CN114078184B (en) * 2021-11-11 2022-10-21 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN114677848B (en) * 2022-03-16 2024-03-29 北京车网科技发展有限公司 Perception early warning system, method, device and computer program product
CN117061684A (en) * 2022-05-07 2023-11-14 北京字跳网络技术有限公司 Special effect video generation method and device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299207A (en) * 2013-07-19 2015-01-21 李峰 Compressed sensing based video background reconstruction and emergent mass incident early warning platform
CN110807393A (en) * 2019-10-25 2020-02-18 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN111144252A (en) * 2019-12-17 2020-05-12 北京深测科技有限公司 Monitoring and early warning method for people stream analysis
CN111753609A (en) * 2019-08-02 2020-10-09 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN111767850A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Method and device for monitoring emergency, electronic equipment and medium
CN112084963A (en) * 2020-09-11 2020-12-15 中德(珠海)人工智能研究院有限公司 Monitoring early warning method, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871129B (en) * 2016-09-27 2019-05-10 北京百度网讯科技有限公司 Method and apparatus for handling point cloud data
US10339771B2 (en) * 2017-02-03 2019-07-02 International Business Machines Coporation Three-dimensional holographic visual and haptic object warning based on visual recognition analysis

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299207A (en) * 2013-07-19 2015-01-21 李峰 Compressed sensing based video background reconstruction and emergent mass incident early warning platform
CN111753609A (en) * 2019-08-02 2020-10-09 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN110807393A (en) * 2019-10-25 2020-02-18 深圳市商汤科技有限公司 Early warning method and device based on video analysis, electronic equipment and storage medium
CN111144252A (en) * 2019-12-17 2020-05-12 北京深测科技有限公司 Monitoring and early warning method for people stream analysis
CN111767850A (en) * 2020-06-29 2020-10-13 北京百度网讯科技有限公司 Method and device for monitoring emergency, electronic equipment and medium
CN112084963A (en) * 2020-09-11 2020-12-15 中德(珠海)人工智能研究院有限公司 Monitoring early warning method, system and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mobile Laser Scanned Point-Clouds for Road Object Detection and Extraction: A Review; Lingfei Ma et al.; Remote Sens.; full text *
Design and Algorithm Research of a Traffic Hazard Early Warning System Based on Panoramic Stereo Vision (基于全景立体视觉的交通危险预警系统设计及算法研究); Zhu Qiwen (朱麒文); China Master's Theses Full-text Database (《中国优秀硕士学位论文全文数据库》); full text *

Also Published As

Publication number Publication date
CN112530021A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN112530021B (en) Method, apparatus, device and storage medium for processing data
US20220147822A1 (en) Training method and apparatus for target detection model, device and storage medium
EP3951741B1 (en) Method for acquiring traffic state, relevant apparatus, roadside device and cloud control platform
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
US20230030431A1 (en) Method and apparatus for extracting feature, device, and storage medium
US20220036731A1 (en) Method for detecting vehicle lane change, roadside device, and cloud control platform
EP3933801A2 (en) Method, apparatus, and device for testing traffic flow monitoring system
CN113177968A (en) Target tracking method and device, electronic equipment and storage medium
CN111601013B (en) Method and apparatus for processing video frames
CN113326773A (en) Recognition model training method, recognition method, device, equipment and storage medium
CN112989987A (en) Method, apparatus, device and storage medium for identifying crowd behavior
CN115861400A (en) Target object detection method, training method and device and electronic equipment
CN114037087B (en) Model training method and device, depth prediction method and device, equipment and medium
CN113325381B (en) Method, apparatus, device and storage medium for processing data
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
US10885704B1 (en) 3D mapping by distinguishing between different environmental regions
CN112509126A (en) Method, device, equipment and storage medium for detecting three-dimensional object
JP7263478B2 (en) Method, device, electronic device, storage medium, roadside unit, cloud control platform and computer program for determining reliability of target detection
CN113379884B (en) Map rendering method, map rendering device, electronic device, storage medium and vehicle
CN115690496A (en) Real-time regional intrusion detection method based on YOLOv5
CN113657596A (en) Method and device for training model and image recognition
CN116229209B (en) Training method of target model, target detection method and device
CN115131472B (en) Transition processing method, device, equipment and medium for panoramic switching
CN116051925B (en) Training sample acquisition method, device, equipment and storage medium
CN113283305B (en) Face recognition method, device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant