CN114155464B - Video data storage method and device, storage medium and terminal


Info

Publication number
CN114155464B
Authority
CN
China
Prior art keywords
video data
data
scene type
video
key
Prior art date
Legal status
Active
Application number
CN202111438203.5A
Other languages
Chinese (zh)
Other versions
CN114155464A (en)
Inventor
靳凤伟
夏曙东
孙智彬
张志平
Current Assignee
Beijing Transwiseway Information Technology Co Ltd
Original Assignee
Beijing Transwiseway Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Transwiseway Information Technology Co Ltd
Priority to CN202111438203.5A
Publication of CN114155464A
Application granted
Publication of CN114155464B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083Shipping
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/008Registering or indicating the working of vehicles communicating information to a remotely located station
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • H04N19/467Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The invention discloses a video data evidence storage method applied to a first client, comprising the following steps: when video data to be transmitted are acquired, acquiring a pre-trained scene type recognition model set for the video data; extracting a plurality of key video frames from the video data, and determining the scene type of each key video frame based on the model; loading driving data of the current vehicle, and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame; synthesizing the mask picture carrying the parameters with each key video frame to generate synthesized target video data; and sending the video data to be transmitted and the driving data to a second client, and sending the processed target video data and the driving data to a cloud server. Because the application identifies the scene type of the video data through the model and performs secondary synthesis on the video data in combination with the driving data, the video reported by the driver is difficult to tamper with, which improves the authenticity of the video.

Description

Video data evidence storing method and device, storage medium and terminal
Technical Field
The invention relates to the technical field of data security, and in particular to a video data evidence storage method and device, a storage medium, and a terminal.
Background
When a truck driver is in a scene such as loading and unloading, traffic jam, an accident, refueling or attendance card punching, the driver needs to record the current operation scene as a watermarked video and report it to the cargo owner or vehicle owner for reporting or vehicle management.
In the prior art, when a truck driver uses a mobile phone to collect evidence, the implementation is as follows: the driver takes pictures or records videos with the phone's sensors, stores the corresponding evidence files on the phone, and reports them from the phone to the cargo owner or vehicle owner. Because the evidence file is stored on the phone and can be replaced or tampered with while being transmitted from it, the evidence file delivered to the back-end server may not be the genuine one; that is, the authenticity of the evidence file cannot be guaranteed, which reduces the authenticity of the video.
Disclosure of Invention
The embodiments of the present application provide a video data evidence storage method and device, a storage medium, and a terminal. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and is intended neither to identify key or critical elements nor to delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description presented later.
In a first aspect, an embodiment of the present application provides a video data evidence storing method, which is applied to a first client, and the method includes:
when video data to be transmitted are collected, a pre-trained scene type recognition model set for the video data is obtained;
extracting a plurality of key video frames in video data, and determining the scene type of each key video frame based on a pre-trained scene type recognition model;
loading driving data of a current vehicle, and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
and sending the video data to be transmitted and the driving data to a second client, and sending the processed target video data and the driving data to a cloud server.
Optionally, the pre-trained scene type recognition model is generated according to the following steps:
acquiring a scene image of a vehicle to obtain a model training sample; the scene image at least comprises a vehicle unloading scene, a vehicle refueling scene, a vehicle driving scene and a vehicle accident scene;
a scene type recognition model is created by adopting a YOLOv5 algorithm;
inputting the model training sample into a scene type recognition model for model training, and outputting a loss value;
when the loss value reaches the minimum value, generating a pre-trained scene type recognition model;
alternatively,
and when the loss value does not reach the minimum value, performing back propagation on the loss value to adjust the model parameters of the scene type recognition model, and continuously inputting the model training sample into the scene type recognition model for model training.
Optionally, the scene type recognition model includes an input end, a reference network, a Neck network and a Head output end;
determining a scene type of each key video frame based on a pre-trained scene type recognition model, comprising:
the input end receives each key video frame, and each key video frame is scaled to a preset size and then normalized to obtain a normalized video frame;
the reference network performs feature extraction on the normalized video frame to obtain a feature map set;
the Neck network performs feature fusion on each feature map in the feature map set and preset basic features to obtain a fused feature map;
and the Head output end adopts a classification branch to classify the fused feature map, and adopts a regression branch to perform linear regression on the classified types to obtain the scene type of each key video frame.
Optionally, a mask picture carrying parameters is constructed according to the driving data and the scene type of each key video frame, including:
acquiring a mask picture;
identifying a first parameter identification set on the mask picture;
identifying a second parameter identification set corresponding to the driving data and the scene type of each key video frame;
and identifying the parameter identifiers which are the same as the parameter identifiers in the second parameter identifier set from the first parameter identifier set, performing data mapping, and generating a mask picture carrying parameters.
Optionally, processing the target video data and then sending the processed target video data and the driving data to the cloud server includes:
acquiring a digital watermark image;
intercepting a square RGB image from the image of the target video data and from the digital watermark image, respectively, to obtain a first image and a second image;
carrying out color channel separation on the first image to obtain a first color component set, and carrying out color channel separation on the second image to obtain a second color component set;
performing Arnold transformation on the first color component set to obtain a transformation matrix;
performing DCT (discrete cosine transformation) on the second color component set according to the transformation matrix to obtain a direct current component;
embedding a digital watermark into target video data according to the transformation matrix and the direct current component to generate processed video data;
and sending the processed video data and the processed driving data to a cloud server.
In a second aspect, an embodiment of the present application provides a video data evidence storing method, which is applied to a cloud server, and the method includes:
receiving the processed video data and the driving data sent by the first client to the cloud server;
converting the processed video data into binary data;
performing SHA256 hash operation on the binary data and the driving data to obtain a first hash character string;
the first hash string is saved to the blockchain.
In a third aspect, an embodiment of the present application provides a video data evidence storing method, which is applied to a second client, and the method includes:
when the video data to be transmitted and the driving data sent by the first client to the second client are received, establishing communication with the cloud server and acquiring the first hash character string stored in the block chain;
performing a SHA256 hash operation on the video data to be transmitted and the driving data to obtain a second hash character string;
when the first hash character string is the same as the second hash character string and the digital watermark in the video data to be transmitted is correct, playing the video data to be transmitted;
alternatively,
when the first hash character string differs from the second hash character string or the digital watermark in the video data to be transmitted is incorrect, determining that the video data to be transmitted fails authentication or has been tampered with, and prohibiting playing of the video data to be transmitted.
In a fourth aspect, an embodiment of the present application provides a video data evidence storing device, which is applied to a first client, and the device includes:
the model acquisition module is used for acquiring a pre-trained scene type recognition model set for video data when the video data to be transmitted are acquired;
the scene type identification module is used for extracting a plurality of key video frames in the video data and determining the scene type of each key video frame based on a pre-trained scene type identification model;
the mask picture construction module is used for loading driving data of the current vehicle and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
the video synthesis module is used for synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
and the video sending module is used for sending the video data to be transmitted and the driving data to the second client, and sending the processed target video data and the driving data to the cloud server.
In a fifth aspect, embodiments of the present application provide a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a sixth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
in the embodiment of the application, the video data evidence storing device firstly acquires a pre-trained scene type recognition model set for video data when the video data to be transmitted are acquired, then extracts a plurality of key video frames in the video data, determines the scene type of each key video frame based on the model, then loads driving data of a current vehicle, constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, then synthesizes the mask picture carrying the parameters and each key video frame to generate synthesized target video data, finally transmits the video data to be transmitted and the driving data to a second client, and transmits the processed target video data and the driving data to a cloud server. According to the method and the device, the scene type of the video data is identified through the model, and the secondary synthesis is carried out on the video data by combining the driving data, so that the video reported by a driver is not easy to tamper, and the authenticity of the video is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flowchart of a video data authentication method applied to a first client according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a video data authentication method applied to a cloud server according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a video data authentication method applied to a second client according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a video data evidence storing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video data evidence storage apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The application provides a video data evidence storage method and device, a storage medium, and a terminal, which are used to solve the above problems in the related art. In the technical solution provided by the application, because the scene type of the video data is identified through the model and secondary synthesis is performed on the video data in combination with the driving data, the video reported by the driver is difficult to tamper with and its authenticity is improved. This is described in detail below through exemplary embodiments.
The video data evidence storage method provided in the embodiments of the present application will be described in detail below with reference to fig. 1 to 4. The method may be implemented by a computer program running on a video data evidence storage device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-type application.
Referring to fig. 1, a schematic flow chart of a video data evidence storing method applied to a first client is provided in an embodiment of the present application. As shown in fig. 1, the method of the embodiment of the present application may include the steps of:
s101, when video data to be transmitted are collected, a pre-trained scene type recognition model set for the video data is obtained;
Generally, a scene type recognition model is a mathematical model that identifies scene types: it outputs the scene type of a video image and is created using the YOLOv5 algorithm.
In the embodiment of the application, when the pre-trained scene type recognition model is generated, scene images of the vehicle are first collected to obtain model training samples, where the scene images at least cover a vehicle unloading scene, a vehicle refueling scene, a vehicle driving scene and a vehicle accident scene. A scene type recognition model is created using the YOLOv5 algorithm, the model training samples are input into it for model training, and a loss value is output; finally, when the loss value reaches its minimum, the pre-trained scene type recognition model is generated.
Further, when the loss value has not reached its minimum, the loss value is back-propagated to adjust the model parameters of the scene type recognition model, and the model training samples continue to be input into the scene type recognition model for training.
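For illustration, this training loop can be sketched as follows in PyTorch. The model class, data loader and stopping rule are assumptions made for the sketch, not the patent's actual implementation:

```python
# Hypothetical sketch of the training loop described above: a scene
# classifier is trained until the loss stops improving, with
# back-propagation adjusting the model parameters on each pass.
import torch
import torch.nn as nn

def train_scene_model(model, train_loader, epochs=50, lr=1e-3):
    criterion = nn.CrossEntropyLoss()        # loss over the four scene types
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    best_loss = float("inf")
    for epoch in range(epochs):
        epoch_loss = 0.0
        for images, labels in train_loader:  # labels: unloading/refueling/driving/accident
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                  # back-propagate to adjust model parameters
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss < best_loss:           # keep training while the loss still decreases
            best_loss = epoch_loss
        else:
            break                            # loss has reached its (local) minimum
    return model
```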
In a possible implementation, when a truck driver is in a scene such as loading and unloading, traffic jam, an accident, refueling or attendance card punching, a video of the current scene is first captured through the watermark camera on the mobile phone terminal to obtain the video data to be transmitted; at this point, the pre-trained scene type recognition model set for the video data is acquired, where the model can identify the scene type of the video scene.
S102, extracting a plurality of key video frames in video data, and determining the scene type of each key video frame based on a pre-trained scene type recognition model;
wherein the key video frames are a plurality of high-definition video images selected from the video data.
In the embodiment of the application, when the plurality of key video frames are extracted from the video data, the image parameters of each video image frame, such as definition, brightness and exposure, are first calculated; a weight value is then calculated for each video image frame from its image parameters, each weight value is compared with a preset weight value, and the video image frames whose weight values are greater than the preset weight value are determined to be key video frames.
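As an illustration of this weighting scheme, the following sketch scores frames with OpenCV; the definition and brightness proxies, the weight coefficients and the threshold are assumptions, not values from the patent:

```python
# Illustrative key-frame selection: score each frame on sharpness and
# brightness, then keep frames whose weighted score exceeds a preset value.
import cv2

def extract_key_frames(video_path, w_sharp=0.7, w_bright=0.3, threshold=60.0):
    cap = cv2.VideoCapture(video_path)
    key_frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()    # definition proxy
        brightness = gray.mean()                             # exposure proxy
        score = w_sharp * sharpness + w_bright * brightness  # per-frame weight value
        if score > threshold:                                # compare with preset weight
            key_frames.append(frame)
    cap.release()
    return key_frames
```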
Further, the scene type identification model comprises an input end, a reference network, a Neck network and a Head output end.
Specifically, when the scene type of each key video frame is determined based on the pre-trained scene type recognition model, the input end receives each key video frame, scales it to a preset size and then normalizes it to obtain a normalized video frame; the reference network performs feature extraction on the normalized video frame to obtain a feature map set; the Neck network performs feature fusion between each feature map in the feature map set and the preset basic features to obtain a fused feature map; and the Head output end uses a classification branch to classify the fused feature map and a regression branch to perform linear regression on the classified types, yielding the scene type of each key video frame.
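A minimal sketch of the input-end preprocessing follows; the 640 × 640 preset size and the tensor layout are assumptions, and the later network stages are only indicated in comments:

```python
# Input-end sketch: each key video frame is scaled to a preset size and
# normalized to [0, 1] before entering the recognition network.
import cv2
import numpy as np

def preprocess_key_frame(frame, size=640):
    resized = cv2.resize(frame, (size, size))         # scale to the preset size
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    normalized = rgb.astype(np.float32) / 255.0       # per-channel normalization
    return np.transpose(normalized, (2, 0, 1))[None]  # 1 x C x H x W batch of one

# Assumed downstream flow: the reference (backbone) network extracts feature
# maps, the Neck fuses them with the preset basic features, and the Head's
# classification branch outputs the scene type of the key video frame.
```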
S103, loading driving data of the current vehicle, and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
in a possible implementation manner, when a mask picture carrying parameters is constructed, the mask picture is firstly obtained, then a first parameter identification set on the mask picture is identified, then a second parameter identification set corresponding to the driving data and the scene type of each key video frame is identified, finally, parameter identifications identical to the parameter identifications in the second parameter identification set are identified from the first parameter identification set for data mapping, and the mask picture carrying the parameters is generated.
Specifically, the driving data include the real-time position longitude and latitude of the truck's Beidou terminal, inverse geocoding, the license plate number, the GPS longitude and latitude information of the mobile phone, basic user information and information manually input by the user.
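As an illustration of the two steps above, the following Pillow sketch maps a few driving-data fields and the recognized scene type onto a transparent mask picture; the field names, layout and styling are hypothetical, not the patent's parameter identifiers:

```python
# Hypothetical construction of a parameter-carrying mask picture: driving
# data and the scene type are mapped onto parameter slots of a transparent
# overlay that will later be synthesized with each key video frame.
from PIL import Image, ImageDraw

def build_mask_picture(size, driving_data, scene_type):
    mask = Image.new("RGBA", size, (0, 0, 0, 0))  # fully transparent mask picture
    draw = ImageDraw.Draw(mask)
    params = {                                    # assumed field names
        "plate": driving_data["plate_number"],
        "lat/lon": f'{driving_data["lat"]:.5f},{driving_data["lon"]:.5f}',
        "address": driving_data["reverse_geocode"],
        "scene": scene_type,
    }
    y = 10
    for name, value in params.items():            # map values onto mask slots
        draw.text((10, y), f"{name}: {value}", fill=(255, 255, 255, 255))
        y += 18
    return mask
```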
S104, synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
in a possible implementation manner, after a mask picture carrying parameters is generated, the mask picture carrying parameters can be synthesized with each key video frame to obtain each synthesized key video frame, and finally, a video formed by each synthesized key video frame is determined as synthesized target video data.
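A minimal compositing sketch, assuming the mask picture and the key frames are Pillow images of the same resolution:

```python
# The parameter-carrying mask is alpha-blended over every key frame, and the
# composited frames make up the synthesized target video data.
from PIL import Image

def composite_frames(key_frames, mask):
    target_frames = []
    for frame in key_frames:                        # frame: PIL.Image in RGB
        base = frame.convert("RGBA")
        merged = Image.alpha_composite(base, mask)  # overlay mask onto frame
        target_frames.append(merged.convert("RGB"))
    return target_frames                            # re-encode as target video
```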
And S105, sending the video data to be transmitted and the driving data to a second client, processing the target video data and sending the processed target video data and the driving data to a cloud server.
In a possible implementation manner, after the synthesized target video data is obtained, the video data to be transmitted and the driving data can be sent to a second client, namely a cargo owner or a vehicle owner.
In a possible implementation, when the target video data is processed and then sent to the cloud server together with the driving data, the digital watermark image is first acquired. A square RGB image is then captured from the image of the target video data and from the digital watermark image, respectively, to obtain a first image and a second image. The first image is subjected to color channel separation to obtain a first color component set, and the second image is subjected to color channel separation to obtain a second color component set. Arnold transformation is performed on the first color component set to obtain a transformation matrix, and a DCT is performed on the second color component set according to the transformation matrix to obtain a direct current component. The digital watermark is then embedded into the target video data according to the transformation matrix and the direct current component to generate the processed video data, and finally the processed video data and the driving data are sent to the cloud server.
Specifically, a square RGB image is selected from the image of the target video data and from the digital watermark image such that the side length of the carrier image is a multiple of 8 and is 8 times the side length of the watermark image. The 3 color channels of the carrier image are separated to obtain the 3 color components IR, IG and IB; the 3 color channels of the digital watermark image are separated to obtain the 3 color components WR, WG and WB.
Arnold transformation is performed on the three color components WR, WG and WB; the transformation process can be regarded as stretching, compressing, folding and splicing, and yields WRA, WGA and WBA.
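The classical Arnold (cat-map) scrambling realizes exactly this stretch-compress-fold-splice process; the sketch below applies it to one square watermark component, with the iteration count as an assumed scrambling key:

```python
# Illustrative Arnold scrambling of one watermark color component; each
# iteration of the cat map stretches, folds and re-splices the pixel grid.
import numpy as np

def arnold_scramble(channel, iterations=10):
    n = channel.shape[0]                  # requires a square component
    scrambled = channel.copy()
    for _ in range(iterations):
        out = np.empty_like(scrambled)
        for x in range(n):
            for y in range(n):
                out[(x + y) % n, (x + 2 * y) % n] = scrambled[x, y]
        scrambled = out
    return scrambled                      # e.g. WR -> WRA
```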
Each component of the image of the target video data is divided into subblocks of size 8 × 8. The subblocks are treated as a whole: the subblock at the top-left corner has position coordinate (1, 1), the subblock adjacent to its right has position coordinate (1, 2), and so on. A DCT is applied to each subblock; the direct current component at the top-left corner of each transformed subblock is then taken out to form a new matrix, with the direct current component of the subblock at position (1, 1) becoming the element at position (1, 1) of the new matrix, the direct current component of the subblock at position (1, 2) becoming the element at position (1, 2), and so on. The resulting matrices are called the direct current component matrices IRD, IGD and IBD.
The watermark is embedded on the direct current component matrices by adding or subtracting k times the brightness of the scrambled WRA, WGA and WBA components. The embedding formulas are as follows:
IRDE = IRD + k × WRA; IGDE = IGD + k × WGA; IBDE = IBD + k × WBA;
and finally, replacing the direct-current component of each subblock by the direct-current component matrixes IRDE, IBDE and IGDE after the watermarks are embedded according to corresponding positions, and completing the watermark embedding of each color component after each subblock is subjected to inverse DCT (discrete cosine transformation) respectively to obtain the processed video data.
Specifically, the driving data is obtained based on the real-time position of the truck's Beidou terminal, the GPS position of the user's mobile phone, basic vehicle information and user information, and the current user's operation scene is identified automatically; finally, by using block chain technology and video digital watermark technology, a safe, convenient and tamper-resistant video reporting method can be provided specifically for truck drivers' reports.
In the embodiment of the application, when the video data to be transmitted are acquired, the video data evidence storage device first acquires a pre-trained scene type recognition model set for the video data; it then extracts a plurality of key video frames from the video data and determines the scene type of each key video frame based on the model; next, it loads the driving data of the current vehicle, constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, and synthesizes the mask picture carrying the parameters with each key video frame to generate the synthesized target video data; finally, it sends the video data to be transmitted and the driving data to the second client, and sends the processed target video data and the driving data to the cloud server. Because the application identifies the scene type of the video data through the model and performs secondary synthesis on the video data in combination with the driving data, the video reported by the driver is difficult to tamper with, which improves the authenticity of the video.
Referring to fig. 2, a schematic flow chart of a video data certification method is provided in the embodiment of the present application, and is applied to a cloud server. As shown in fig. 2, the method of the embodiment of the present application may include the following steps:
S201, receiving the processed video data and the driving data sent by the first client to the cloud server;
S202, converting the processed video data into binary data;
S203, performing a SHA256 hash operation on the binary data and the driving data to obtain a first hash character string;
S204, saving the first hash character string to the block chain.
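Steps S202 to S204 amount to hashing the video bytes together with the driving data; a minimal sketch follows, with the serialization of the driving data assumed:

```python
# Evidence-fixing sketch: the processed video bytes and the driving data are
# hashed together with SHA-256; the digest string is what is written to the
# block chain.
import hashlib
import json

def first_hash(video_bytes: bytes, driving_data: dict) -> str:
    h = hashlib.sha256()
    h.update(video_bytes)                                        # binary video data
    h.update(json.dumps(driving_data, sort_keys=True).encode())  # driving data
    return h.hexdigest()                                         # first hash string
```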
In the embodiment of the application, the video data evidence storing device firstly acquires a pre-trained scene type recognition model set for video data when the video data to be transmitted are acquired, then extracts a plurality of key video frames in the video data, determines the scene type of each key video frame based on the model, then loads driving data of a current vehicle, constructs mask pictures carrying parameters according to the driving data and the scene type of each key video frame, synthesizes the mask pictures carrying the parameters and each key video frame to generate synthesized target video data, finally transmits the video data to be transmitted and the driving data to a second client, and transmits the processed target video data and the driving data to a cloud server. According to the method and the device, the scene type of the video data is identified through the model, and the secondary synthesis is carried out on the video data by combining the driving data, so that the video reported by a driver is not easy to tamper, and the authenticity of the video is improved.
Referring to fig. 3, a schematic flow chart of a video data certification method is provided in this embodiment of the application, applied to a second client. As shown in fig. 3, the method of the embodiment of the present application may include the following steps:
S301, when the video data to be transmitted and the driving data sent by the first client to the second client are received, establishing communication with the cloud server and acquiring the first hash character string stored in the block chain;
S302, performing a SHA256 hash operation on the video data to be transmitted and the driving data to obtain a second hash character string;
S303, when the first hash character string is the same as the second hash character string and the digital watermark in the video data to be transmitted is correct, playing the video data to be transmitted; otherwise, determining that the video data to be transmitted fails authentication or has been tampered with, and prohibiting playing of the video data to be transmitted.
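The check in S301 to S303 can be sketched as follows, reusing the first_hash routine from the cloud-server sketch above; the watermark verification itself is represented by an assumed boolean flag:

```python
# Second-client verification sketch: recompute the hash over the received
# video and driving data, compare it with the on-chain string, and only play
# the video when both the hashes and the watermark check pass.
def verify_and_play(video_bytes, driving_data, chain_hash, watermark_ok):
    second_hash = first_hash(video_bytes, driving_data)  # same SHA-256 routine
    if second_hash == chain_hash and watermark_ok:
        return "play"                                    # evidence is intact
    return "reject"                                      # tampered or failed auth
```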
For example, as shown in fig. 4, which is a schematic block diagram of the video data evidence storage process provided by the present application: in the driver's first client, after the program installed on the first client is started, the recognition model is downloaded automatically, and the longitude and latitude of the mobile phone position and of the truck's Beidou terminal position are obtained automatically through the freight big data. Then, after the first client starts the watermark camera and records a video, the key video frames in the video are extracted and input into the downloaded recognition model to identify the scene type of each key video frame, and a mask picture carrying parameters is obtained according to the scene types and the driving data. The mask picture is synthesized with each key video frame to obtain a synthesized video; the synthesized video is processed and then sent to the cloud server together with the driving data, while the just-recorded video and the driving data are sent to the second client.
After the cloud server receives the processed synthesized video and the driving data, a SHA256 hash operation is performed on them to obtain a hash character string, and the hash character string is put on the block chain to prevent tampering.
When the second client receives the just-recorded video and the driving data, a SHA256 hash operation is performed on them to obtain a target hash character string, and the hash character string stored on the block chain is acquired. Finally, the target hash character string is compared with the hash character string stored on the block chain: if they are consistent and the digital watermark is correct, the just-recorded video is played; otherwise, authentication fails or the video has been tampered with, and the operation ends.
In the embodiment of the application, when the video data to be transmitted are acquired, the video data evidence storage device first acquires a pre-trained scene type recognition model set for the video data; it then extracts a plurality of key video frames from the video data and determines the scene type of each key video frame based on the model; next, it loads the driving data of the current vehicle, constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, and synthesizes the mask picture carrying the parameters with each key video frame to generate the synthesized target video data; finally, it sends the video data to be transmitted and the driving data to the second client, and sends the processed target video data and the driving data to the cloud server. Because the application identifies the scene type of the video data through the model and performs secondary synthesis on the video data in combination with the driving data, the video reported by the driver is difficult to tamper with, which improves the authenticity of the video.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
Referring to fig. 5, a schematic structural diagram of a video data evidence storage apparatus according to an exemplary embodiment of the present invention is shown. The video data authentication device can be implemented as all or part of the terminal through software, hardware or a combination of the two. The device 1 comprises a model acquisition module 10, a scene type identification module 20, a mask picture construction module 30, a video synthesis module 40 and a video sending module 50.
The model acquisition module 10 is configured to acquire a pre-trained scene type recognition model set for video data when the video data to be transmitted is acquired;
the scene type identification module 20 is configured to extract a plurality of key video frames in the video data, and determine a scene type of each key video frame based on a pre-trained scene type identification model;
the mask picture construction module 30 is used for loading driving data of the current vehicle and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
the video synthesis module 40 is used for synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
and the video sending module 50 is configured to send the video data to be transmitted and the driving data to the second client, and send the processed target video data and the driving data to the cloud server together.
It should be noted that, when the video data certification device provided in the foregoing embodiment executes the video data certification method, only the division of the above functional modules is taken as an example, and in practical applications, the above functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the video data evidence storing device and the video data evidence storing method provided by the above embodiments belong to the same concept, and details of the implementation process are shown in the method embodiments and are not described herein again.
The above-mentioned serial numbers of the embodiments of the present application are merely for description, and do not represent the advantages and disadvantages of the embodiments.
In the embodiment of the application, when the video data to be transmitted are acquired, the video data evidence storage device first acquires a pre-trained scene type recognition model set for the video data; it then extracts a plurality of key video frames from the video data and determines the scene type of each key video frame based on the model; next, it loads the driving data of the current vehicle, constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, and synthesizes the mask picture carrying the parameters with each key video frame to generate the synthesized target video data; finally, it sends the video data to be transmitted and the driving data to the second client, and sends the processed target video data and the driving data to the cloud server. Because the application identifies the scene type of the video data through the model and performs secondary synthesis on the video data in combination with the driving data, the video reported by the driver is difficult to tamper with, which improves the authenticity of the video.
The present invention also provides a computer readable medium, on which program instructions are stored, which when executed by a processor implement the video data credentialing method provided by the above-mentioned method embodiments.
The present invention also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the video data credentialing method of the above-described respective method embodiments.
Please refer to fig. 6, which provides a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 6, terminal 1000 can include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
The communication bus 1002 is used to implement connection communication among these components.
The user interface 1003 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. The processor 1001 connects various components throughout the electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1005 and by invoking the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form among Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 1001 and may instead be implemented by a separate chip.
The Memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store instructions, programs, code, code sets or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function or an image playing function), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 6, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a video data certification application program.
In the terminal 1000 shown in fig. 6, the user interface 1003 is mainly used as an interface for providing input for a user, and acquiring data input by the user; the processor 1001 may be configured to call the video data certification application stored in the memory 1005, and specifically perform the following operations:
when video data to be transmitted are collected, a pre-trained scene type recognition model set for the video data is obtained;
extracting a plurality of key video frames in video data, and determining the scene type of each key video frame based on a pre-trained scene type recognition model;
loading driving data of a current vehicle, and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
and sending the video data to be transmitted and the driving data to a second client, and sending the processed target video data and the driving data to a cloud server.
In one embodiment, the processor 1001 specifically performs the following operations when generating the pre-trained scene type recognition model:
acquiring a scene image of a vehicle to obtain a model training sample; the scene image at least comprises a vehicle unloading scene, a vehicle refueling scene, a vehicle driving scene and a vehicle accident scene;
a scene type identification model is created by adopting a YOLOv5 algorithm;
inputting the model training sample into a scene type recognition model for model training, and outputting a loss value;
when the loss value reaches the minimum value, generating a pre-trained scene type recognition model;
alternatively,
and when the loss value does not reach the minimum value, performing back propagation on the loss value to adjust the model parameters of the scene type recognition model, and continuously inputting the model training samples into the scene type recognition model for model training.
In one embodiment, when the processor 1001 determines the scene type of each key video frame based on the pre-trained scene type recognition model, it specifically performs the following operations:
the input end receives each key video frame, and each key video frame is scaled to a preset size and then normalized to obtain a normalized video frame;
the reference network performs feature extraction on the normalized video frame to obtain a feature map set;
the Neck network performs feature fusion on each feature map in the feature map set and the preset basic features to obtain a fused feature map;
and the Head output end adopts a classification branch to classify the fused feature map, and adopts a regression branch to perform linear regression on the classified types to obtain the scene type of each key video frame.
In one embodiment, when the processor 1001 constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, the following operations are specifically performed:
acquiring a mask picture;
identifying a first parameter identification set on the mask picture;
identifying a second parameter identification set corresponding to the driving data and the scene type of each key video frame;
and identifying the parameter identifiers which are the same as the parameter identifiers in the second parameter identifier set from the first parameter identifier set, performing data mapping, and generating a mask picture carrying the parameters.
In an embodiment, when the processor 1001 performs processing on the target video data and then sends the processed target video data and the driving data to the cloud server, the following operations are specifically performed:
acquiring a digital watermark image;
intercepting a square RGB image from an image of target video data and a digital watermark image respectively to obtain a first image and a second image;
carrying out color channel separation on the first image to obtain a first color component set, and carrying out color channel separation on the second image to obtain a second color component set;
performing Arnold transformation on the first color component set to obtain a transformation matrix;
performing DCT (discrete cosine transformation) on the second color component set according to the transformation matrix to obtain a direct current component;
embedding digital watermarks into the target video data according to the transformation matrix and the direct-current component to generate processed video data;
and sending the processed video data and the processed driving data to a cloud server.
In the embodiment of the application, the video data evidence storing device firstly acquires a pre-trained scene type recognition model set for video data when the video data to be transmitted are acquired, then extracts a plurality of key video frames in the video data, determines the scene type of each key video frame based on the model, then loads driving data of a current vehicle, constructs a mask picture carrying parameters according to the driving data and the scene type of each key video frame, then synthesizes the mask picture carrying the parameters and each key video frame to generate synthesized target video data, finally transmits the video data to be transmitted and the driving data to a second client, and transmits the processed target video data and the driving data to a cloud server. According to the method and the device, the scene type of the video data is identified through the model, and the secondary synthesis is carried out on the video data by combining the driving data, so that the video reported by a driver is not easy to tamper, and the authenticity of the video is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program to instruct associated hardware, and the program stored in the computer-readable storage medium can include the processes of the embodiments of the methods described above when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not to be construed as limiting the scope of the present application, so that the present application is not limited thereto, and all equivalent variations and modifications can be made to the present application.

Claims (10)

1. A video data evidence storing method is applied to a first client side, and comprises the following steps:
when video data to be transmitted are collected, a pre-trained scene type recognition model set for the video data is obtained;
extracting a plurality of key video frames in the video data, and determining the scene type of each key video frame based on the pre-trained scene type recognition model;
loading driving data of a current vehicle, and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
and sending the video data to be transmitted and the driving data to a second client, and sending the processed target video data and the driving data to a cloud server.
2. The method of claim 1, wherein generating a pre-trained scene type recognition model comprises:
acquiring a scene image of a vehicle to obtain a model training sample; the scene image at least comprises a vehicle unloading scene, a vehicle oiling scene, a vehicle driving scene and a vehicle accident scene;
a scene type identification model is created by adopting a YOLOv5 algorithm;
inputting the model training sample into the scene type recognition model for model training, and outputting a loss value;
when the loss value reaches the minimum value, generating a scene type recognition model trained in advance;
alternatively,
and when the loss value does not reach the minimum value, performing back propagation on the loss value to adjust the model parameters of the scene type recognition model, and continuously inputting the model training sample into the scene type recognition model for model training.
3. The method of claim 1, wherein the scene type recognition model comprises an input, a reference network, a Neck network, and a Head output;
the determining the scene type of each key video frame based on the pre-trained scene type recognition model comprises:
the method comprises the steps that an input end receives each key video frame, and each key video frame is zoomed to a preset size and then normalized to obtain a normalized video frame;
the reference network performs feature extraction on the normalized video frame to obtain a feature map set;
the Neck network performs feature fusion on each feature map in the feature map set and preset basic features to obtain a fused feature map;
and the Head output end adopts a classification branch to classify the fused feature map, and adopts a regression branch to perform linear regression on the classified types to obtain the scene type of each key video frame.
4. The method of claim 1, wherein the constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame comprises:
acquiring a mask picture;
identifying a first set of parameter identifications on the mask picture;
identifying a second parameter identification set corresponding to the driving data and the scene type of each key video frame;
and identifying the parameter identifiers which are the same as the parameter identifiers in the second parameter identifier set from the first parameter identifier set, performing data mapping, and generating a mask picture carrying parameters.
5. The method of claim 1, wherein processing the target video data and sending the processed target video data together with the driving data to the cloud server comprises:
acquiring a digital watermark image;
intercepting a square RGB image from the image of the target video data and the digital watermark image respectively to obtain a first image and a second image;
carrying out color channel separation on the first image to obtain a first color component set, and carrying out color channel separation on the second image to obtain a second color component set;
performing Arnold transformation on the first color component set to obtain a transformation matrix;
performing DCT (discrete cosine transformation) on the second color component set according to the transformation matrix to obtain a direct current component;
embedding a digital watermark into target video data according to the transformation matrix and the direct current component to generate processed video data;
and sending the processed video data and the driving data to a cloud server.
6. The method of claim 1, applied to a cloud server, comprising:
receiving the processed video data and the driving data sent by the first client to the cloud server;
converting the processed video data into binary data;
performing SHA256 hash operation on the binary data and the driving data to obtain a first hash character string;
and saving the first hash character string to a block chain.
7. The method of claim 6, applied to a second client, the method comprising:
when the video data to be transmitted and the driving data sent by the first client to the second client are received, establishing communication with the cloud server and acquiring the first hash character string stored in the block chain;
performing a SHA256 hash operation on the video data to be transmitted and the driving data to obtain a second hash character string;
when the first hash character string is the same as the second hash character string and the digital watermark in the video data to be transmitted is correct, playing the video data to be transmitted;
otherwise, determining that the authentication of the video data to be transmitted fails or is tampered, and forbidding playing the video data to be transmitted.
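Claim 7's check mirrors the first-client hash construction; the sketch below recomputes it and assumes the watermark check has already been performed upstream (the watermark_ok flag is an assumption, not from the claim).

```python
import hashlib
import json

def recompute_hash(video_path: str, driving_data: dict) -> str:
    # Same construction as the first-client hash: video bytes + canonical JSON.
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        h.update(f.read())
    h.update(json.dumps(driving_data, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

def verify_before_playback(video_path: str, driving_data: dict,
                           chain_hash: str, watermark_ok: bool) -> bool:
    """Play only when the recomputed hash matches the on-chain record
    and the digital watermark in the received video checks out."""
    second_hash = recompute_hash(video_path, driving_data)
    return second_hash == chain_hash and watermark_ok
```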
8. A video data evidence storage device applied to a first client, characterized by comprising:
the model acquisition module is used for acquiring, when the video data to be transmitted is acquired, a pre-trained scene type recognition model configured for the video data;
the scene type recognition module is used for extracting a plurality of key video frames in the video data and determining the scene type of each key video frame based on the pre-trained scene type recognition model;
the mask picture construction module is used for loading driving data of the current vehicle and constructing a mask picture carrying parameters according to the driving data and the scene type of each key video frame;
the video synthesis module is used for synthesizing the mask picture carrying the parameters and each key video frame to generate synthesized target video data;
the video sending module is used for sending the video data to be transmitted and the driving data to a second client side, and sending the processed target video data and the driving data to a cloud server.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to perform the method steps according to any one of claims 1 to 7.
10. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1-7.
CN202111438203.5A 2021-11-29 2021-11-29 Video data storage method and device, storage medium and terminal Active CN114155464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111438203.5A CN114155464B (en) 2021-11-29 2021-11-29 Video data storage method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN114155464A CN114155464A (en) 2022-03-08
CN114155464B true CN114155464B (en) 2022-11-25

Family

ID=80784328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111438203.5A Active CN114155464B (en) 2021-11-29 2021-11-29 Video data storage method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN114155464B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116628242A (en) * 2023-07-20 2023-08-22 北京中交兴路信息科技股份有限公司 Truck evidence-storing data verification system and method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201463902U (en) * 2009-08-06 2010-05-12 安霸半导体技术(上海)有限公司 Vehicle-mounted navigation and video record integrated device
CN108476324A (en) * 2015-10-08 2018-08-31 皇家Kpn公司 Area-of-interest in the video frame of enhanced video stream
CN109191197A (en) * 2018-08-24 2019-01-11 陕西优米数据技术有限公司 Video passenger flow statistical analysis based on block chain technology
CN109361952A (en) * 2018-12-14 2019-02-19 司马大大(北京)智能系统有限公司 Video management method, apparatus, system and electronic equipment
CN110648244A (en) * 2019-09-05 2020-01-03 广州亚美信息科技有限公司 Block chain-based vehicle insurance scheme generation method and device and driving data processing system
CN110969207A (en) * 2019-11-29 2020-04-07 腾讯科技(深圳)有限公司 Electronic evidence processing method, device, equipment and storage medium
CN111428211A (en) * 2020-03-20 2020-07-17 浙江传媒学院 Evidence storage method for multi-factor authority-determining source tracing of video works facing alliance block chain
CN111506652A (en) * 2020-04-15 2020-08-07 支付宝(杭州)信息技术有限公司 Traffic accident handling method and device based on block chain and electronic equipment
CN111738218A (en) * 2020-07-27 2020-10-02 成都睿沿科技有限公司 Human body abnormal behavior recognition system and method
CN111985356A (en) * 2020-07-31 2020-11-24 星际控股集团有限公司 Evidence generation method and device for traffic violation, electronic equipment and storage medium
CN112632637A (en) * 2020-12-23 2021-04-09 杭州趣链科技有限公司 Tamper-proof evidence obtaining method, system, device, storage medium and electronic equipment
CN113326317A (en) * 2021-05-24 2021-08-31 中国科学院计算技术研究所 Block chain evidence storing method and system based on isomorphic multi-chain architecture
CN113486304A (en) * 2021-07-07 2021-10-08 广州宇诚达信息科技有限公司 Image or video piracy prevention system and method
CN113613015A (en) * 2021-07-30 2021-11-05 广州盈可视电子科技有限公司 Tamper-resistant video generation method and device, electronic equipment and readable medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6947598B2 (en) * 2001-04-20 2005-09-20 Front Porch Digital Inc. Methods and apparatus for generating, including and using information relating to archived audio/video data
CA2957567A1 (en) * 2017-02-10 2018-08-10 Spxtrm Health Inc. Secure monitoring of private encounters
US10970334B2 (en) * 2017-07-24 2021-04-06 International Business Machines Corporation Navigating video scenes using cognitive insights
CN111385102B (en) * 2020-03-20 2021-05-11 浙江传媒学院 Video copyright transaction tracing method based on parent chain

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Automatic, location privacy preserving dashcam video sharing using blockchain and deep learning; Taehyoung Kim et al.; Human-centric Computing and Information Sciences; 2020-12-31; pp. 1-23 *
Blockchain-based copyright evidence storage method for dance short videos; Yang Yang et al.; Video Engineering; 2020-12-31; Vol. 44, No. 8; pp. 51-59 *
Tort liability analysis of traffic accidents involving autonomous vehicles: the Uber case as an example; Huang Jiajia; Shanghai Law Research; 2019-12-31; Vol. 9; pp. 1-7 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant