CN115708116A - Video verification method, device and system, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115708116A
Authority
CN
China
Prior art keywords
video
information
verification
pieces
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110914275.6A
Other languages
Chinese (zh)
Inventor
王文龙
顾梦奇
钱成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruiting Network Technology Shanghai Co ltd
Original Assignee
Ruiting Network Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruiting Network Technology Shanghai Co ltd
Priority to CN202110914275.6A
Publication of CN115708116A
Legal status: Pending

Abstract

The present disclosure provides a video verification method, a video verification apparatus, a video verification system, an electronic device, and a non-transitory computer-readable storage medium. The video verification method is applied to a client and includes the following steps: acquiring a captured first video to obtain video data including the first video; acquiring a plurality of pieces of verification information corresponding to the first video, wherein the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to the terminal that captures the first video, and the plurality of pieces of dynamic information represent information collected by the client that is generated while the first video is being captured; and sending the video data and the plurality of pieces of verification information to a server, so that the server can verify the first video based on the plurality of pieces of verification information and the video data.

Description

Video verification method, device and system, electronic equipment and storage medium
Technical Field
Embodiments of the present disclosure relate to a video verification method, a video verification apparatus, a video verification system, an electronic device, and a non-transitory computer-readable storage medium.
Background
Driven by the traffic value of housing listings, various black-market operations (i.e., online criminal enterprises) and illegal brokers have built profit chains around cheating and faking housing-listing videos. Various cheating behaviors occur during the shooting and uploading of listing videos, which challenges the review work of back-office approval staff, increases their workload, and at the same time makes it harder to maintain the quality of the listings.
Disclosure of Invention
At least one embodiment of the present disclosure provides a video verification method applied to a client, where the method includes: acquiring a shot first video to obtain video data comprising the first video; acquiring a plurality of pieces of authentication information corresponding to the first video, wherein the plurality of pieces of authentication information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a terminal which shoots the first video, and the plurality of pieces of dynamic information represent information which is acquired by the client and generated in the process of shooting the first video; and sending the video data and the verification information to a server side so that the server side can verify the first video based on the verification information and the video data.
For example, in a video verification method provided by at least one embodiment of the present disclosure, the video data further includes a video identifier corresponding to the first video, and the method further includes: generating a hardware identification based on the at least one hardware static information; binding the hardware identification with the first video to take the hardware identification as the video identification.
For example, in a video verification method provided in at least one embodiment of the present disclosure, the client includes an application program, the first video is obtained by the terminal executing the application program to perform shooting, and the acquiring multiple pieces of verification information corresponding to the first video includes: collecting the plurality of static information; and acquiring the plurality of dynamic information in the process of shooting the first video after the application program is started.
For example, in a video verification method provided by at least one embodiment of the present disclosure, the plurality of static information further includes at least one piece of software static information corresponding to the application program, and the at least one piece of hardware static information includes one or more of the following pieces of information: the International Mobile Equipment Identity of the terminal, the Media Access Control address of the terminal, the model of the central processing unit of the terminal, the root information of the terminal, the model of the main board of the terminal, the device identity number of the terminal, and the manufacturer information of the read-only memory of the terminal. The at least one piece of software static information includes one or more of the following pieces of information: the version number of the application program, debugging information, read-only memory information, the root information of the application program, and the package name.
For example, in a video verification method provided by at least one embodiment of the present disclosure, the plurality of pieces of dynamic information include at least one piece of hardware dynamic information and at least one piece of software dynamic information. The at least one piece of hardware dynamic information is dynamic information generated by the terminal and collected while the first video is being shot, and includes one or more of the following pieces of information: sensor information, network information, device hardware information, and external device information. The sensor information includes data from one or more of the following sensors: an acceleration sensor, a gyroscope sensor, a gravity sensor, a rotation vector sensor, a pedometer sensor, an orientation sensor, a geomagnetic rotation vector sensor, and an illumination sensor. The network information includes wireless hotspot information and mobile network base station information; the device hardware information includes GPS information; and the external device information includes Bluetooth device information and external storage device information. The at least one piece of software dynamic information is dynamic information generated while the user shooting the first video uses the application program, collected during shooting of the first video, and includes one or more of the following pieces of information: Internet protocol address information, risk plug-ins, memory information, terminal bandwidth load, method stack, hook risk, behavior logs, and software error-report logs.
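The static and dynamic dimensions enumerated above can be pictured as one payload that the client accumulates and uploads. The following is a minimal Python sketch of such a structure; every field name here is an assumption for illustration, not the patent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationInfo:
    # Static hardware dimensions (collected once, e.g. when the app starts)
    imei: str = ""
    mac_address: str = ""
    cpu_model: str = ""
    # Static software dimensions
    app_version: str = ""
    package_name: str = ""
    # Dynamic hardware dimensions (sampled while the first video is shot)
    sensor_readings: dict = field(default_factory=dict)  # e.g. {"gyroscope": [...]}
    gps_info: str = ""
    # Dynamic software dimensions
    ip_address: str = ""
    behavior_log: list = field(default_factory=list)

# Example: fill in a few dimensions as they become available
info = VerificationInfo(imei="356938035643809", app_version="5.1.0")
info.sensor_readings["gyroscope"] = [0.01, 0.02, 0.01]
```

The point of the structure is only that static fields are filled once while dynamic fields keep accumulating during shooting; the server later checks both groups together.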
For example, in the video verification method provided in at least one embodiment of the present disclosure, the at least one piece of hardware dynamic information is acquired based on a predetermined acquisition rule after the client is started.
For example, in the video verification method provided in at least one embodiment of the present disclosure, the plurality of static information further includes object information, where the object information is information that represents an object in the first video and is input by a user who captures the first video through a terminal where the client is located.
At least one embodiment of the present disclosure provides a video verification method applied to a server, where the method includes: receiving video data including a first video and a plurality of pieces of verification information corresponding to the first video, both transmitted from a client, wherein the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to the video shooting terminal that shoots the first video, and the plurality of pieces of dynamic information represent information acquired in the process of shooting the first video; and verifying the first video based on the plurality of pieces of verification information and the video data.
For example, in a video verification method provided by at least one embodiment of the present disclosure, the client is located in the video capturing terminal or in a terminal different from the video capturing terminal, the video data further includes a video identifier corresponding to the first video, and verifying the first video based on the plurality of pieces of verification information and the video data includes: determining a hardware identification based on the at least one hardware static information; acquiring a video identifier in the video data; in response to the hardware identification and the video identification being consistent, determining that the first video is a real video shot by the video shooting terminal.
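The identifier check described here is small enough to sketch directly. The SHA-256 combination rule below is an assumption, since the patent does not fix a concrete derivation for the hardware identifier:

```python
import hashlib

def hardware_id(static_info: dict) -> str:
    # Combine the hardware static fields into one deterministic identifier,
    # sorted by key so the same fields always yield the same identifier.
    joined = "|".join(f"{k}={static_info[k]}" for k in sorted(static_info))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def is_real_video(video_identifier: str, reported_static_info: dict) -> bool:
    # The video is judged real only when the identifier bound to it by the
    # client matches the identifier recomputed from the reported hardware info.
    return video_identifier == hardware_id(reported_static_info)
```

If the video was recorded on one device but uploaded with another device's identifier, or the identifier was forged, the recomputed value no longer matches and the check fails.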
For example, in a video verification method provided in at least one embodiment of the present disclosure, verifying the first video based on the plurality of pieces of verification information and the video data includes: acquiring a target type and a target number, wherein the target type represents a preset information type of verification information required to be acquired in the process of shooting the first video, and the target number represents a preset number of verification information required to be acquired in the process of shooting the first video; determining the first video to be a real video captured by the video capturing terminal in response to the types of the plurality of authentication information being identical to the target type and the number of the plurality of authentication information being the same as the target number.
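The type-and-count check above can be sketched as follows; the list-of-records representation of the collected information is an assumption made for illustration:

```python
def completeness_check(collected: list, target_types: set, target_count: int) -> bool:
    # The reported verification information must cover exactly the
    # preconfigured information types, in exactly the expected quantity;
    # missing or extra dimensions both fail the check.
    types = {item["type"] for item in collected}
    return types == target_types and len(collected) == target_count

# Hypothetical example payload with two dimensions collected during shooting
collected = [
    {"type": "imei", "value": "356938035643809"},
    {"type": "gyroscope", "value": [0.01, 0.02]},
]
```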
For example, in a video verification method provided in at least one embodiment of the present disclosure, the plurality of static information further includes object information, where the object information is information that represents an object in the first video and is input by a user who captures the first video through a terminal where the client is located, and the first video is verified based on the plurality of pieces of verification information and the video data, and the method further includes: determining a target object based on the object information; verifying consistency between the first video and the target object based on the plurality of verification information and the video data.
For example, in a video verification method provided in at least one embodiment of the present disclosure, verifying consistency between the first video and the target object based on the plurality of pieces of verification information and the video data includes: obtaining at least one piece of reference verification information; processing at least one piece of verification information among the plurality of pieces of verification information together with the at least one piece of reference verification information to obtain at least one verification result, wherein each verification result corresponds to at least one piece of verification information; and verifying the consistency between the first video and the target object based on the at least one verification result.
For example, in a video verification method provided in at least one embodiment of the present disclosure, verifying consistency between the first video and the target object based on the plurality of pieces of verification information and the video data includes: processing at least one piece of verification information in the plurality of pieces of verification information through a machine learning model to obtain at least one verification result; verifying a correspondence between the first video and the target object based on the at least one verification result.
For example, in a video verification method provided by at least one embodiment of the present disclosure, verifying consistency between the first video and the target object based on the at least one verification result includes: determining at least one weight corresponding to the at least one verification result one by one; processing the at least one verification result and the at least one weight to obtain a scoring result; verifying a correspondence between the first video and the target object based on the scoring result.
For example, in a video verification method provided by at least one embodiment of the present disclosure, verifying the consistency between the first video and the target object based on the score result includes: determining that the first video and the target object are consistent when the score result is greater than or equal to a first predetermined threshold; determining that the first video and the target object are not consistent when the score result is less than a second predetermined threshold; when the score result is smaller than the first preset threshold and larger than the second preset threshold, acquiring historical information of a user shooting the first video, and determining that the first video and the target object are consistent in response to the historical information indicating that the user does not have cheating behaviors; and determining that the first video and the target object are not consistent in response to the user history information indicating that the user has cheating behavior.
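The weighting and two-threshold logic above maps directly to code. In the sketch below, the weight values and both thresholds are hypothetical, since the patent does not fix their values:

```python
def weighted_score(results, weights):
    # Each verification result contributes to the score according to
    # its one-to-one corresponding weight.
    return sum(r * w for r, w in zip(results, weights))

def is_consistent(score, first_threshold=0.8, second_threshold=0.4,
                  user_has_cheated=False):
    if score >= first_threshold:       # clearly consistent
        return True
    if score < second_threshold:       # clearly inconsistent
        return False
    # Gray zone between the two thresholds: fall back to the history
    # information of the user who shot the video.
    return not user_has_cheated
```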
For example, in a video verification method provided in at least one embodiment of the present disclosure, the target object is an interior space and/or an exterior space of a building.
At least one embodiment of the present disclosure provides a video verification apparatus applied to a client, where the video verification apparatus includes: an obtaining module configured to obtain a captured first video so as to obtain video data including the first video; a collection module configured to collect a plurality of pieces of verification information corresponding to the first video, wherein the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to the terminal that shoots the first video, and the plurality of pieces of dynamic information represent information that is collected by the client and generated in the process of shooting the first video; and a transmission module configured to send the video data and the plurality of pieces of verification information to a server, so that the server can verify the first video based on the plurality of pieces of verification information and the video data.
At least one embodiment of the present disclosure provides a video verification apparatus applied to a server, where the video verification apparatus includes: a receiving module configured to receive video data including a first video and a plurality of pieces of verification information corresponding to the first video, both transmitted from a client, wherein the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to the video shooting terminal that shoots the first video, and the plurality of pieces of dynamic information represent information acquired in the process of shooting the first video; and a verification module configured to verify the first video based on the plurality of pieces of verification information and the video data.
At least one embodiment of the present disclosure provides an electronic device, including: a processor and a memory. The memory stores computer readable instructions which, when executed by the processor, implement the video verification method of any of the above embodiments.
At least one embodiment of the present disclosure provides a video verification system, including: client and server. The client is configured to: acquiring a shot first video to obtain video data comprising the first video; acquiring a plurality of pieces of authentication information corresponding to the first video, wherein the plurality of pieces of authentication information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a terminal which shoots the first video, and the plurality of pieces of dynamic information represent information which is acquired by the client and generated in the process of shooting the first video; sending the video data and the verification information to the server side; the server side is configured to: receiving the video data and the plurality of authentication information; validating the first video based on the plurality of validation information and the video data.
At least one embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer-readable instructions, wherein when the computer-readable instructions are executed by a processor, the video verification method according to any one of the above embodiments is implemented.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly introduced below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure and are not limiting to the present disclosure.
Fig. 1 is a schematic diagram of a video verification method according to at least one embodiment of the present disclosure;
fig. 2 is a schematic diagram of another video verification method according to at least one embodiment of the present disclosure;
fig. 3 is a schematic block diagram of a video authentication apparatus according to at least one embodiment of the present disclosure;
fig. 4 is a schematic block diagram of another video authentication apparatus provided in at least one embodiment of the present disclosure;
fig. 5 is a schematic block diagram of a video verification system provided in at least one embodiment of the present disclosure;
fig. 6 is a flowchart of a video verification method according to an embodiment of the present disclosure;
fig. 7 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure;
fig. 8 is a schematic block diagram of a non-transitory computer-readable storage medium provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The use of "first," "second," and similar terms in this disclosure is not intended to indicate any order, quantity, or importance, but rather is used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
To keep the following description of the embodiments of the present disclosure clear and concise, a detailed description of some known functions and components has been omitted from the present disclosure.
At present, the criteria and dimensions used to review housing-listing videos are rather limited: only the content of the video data uploaded by the broker is examined, while neither the authenticity of the uploaded video data nor the authenticity of the device is verified. Fake data uploaded by black-market operations and through cheating behaviors therefore passes review easily and is published to the network platform, so that various fake listings exist on the platform. This increases the cost of verifying listings offline, increases the workload of staff, and also gives users a poor experience.
At least one embodiment of the present disclosure provides a video verification method, a video verification apparatus, a video verification system, an electronic device, and a non-transitory computer-readable storage medium. The video verification method is applied to a client and can comprise the following steps: acquiring a shot first video to obtain video data comprising the first video; acquiring a plurality of pieces of verification information corresponding to a first video, wherein the plurality of pieces of verification information comprise a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information comprise at least one piece of hardware static information corresponding to a terminal for shooting the first video, and the plurality of pieces of dynamic information represent information generated in the process of shooting the first video and acquired by a client; and sending the video data and the plurality of verification information to the server side so that the server side can verify the first video based on the plurality of verification information and the video data.
In the embodiments of the disclosure, by collecting verification information of multiple dimensions related to the video uploaded to the server, the video can be effectively verified based on the verification information and the video itself, for example, to confirm its authenticity. For example, the video verification method can be applied to verifying videos of housing listings, so that black-market operations and cheating behaviors in video shooting can be effectively identified, the cost of cheating and counterfeiting is raised, fake listing videos are avoided or reduced, the authenticity and quality of listings on the network platform are greatly improved, the experience of users browsing listings is improved, and the cost of manually verifying listings offline is greatly reduced.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings, but the present disclosure is not limited to these specific embodiments.
It should be noted that the following embodiments of the present disclosure are described taking a video of a house (for example, a commercial house or a second-hand house) as an example, however, it should be understood that the embodiments of the present disclosure are not limited to the video of the house, but may be videos of other objects (for example, a scene such as a park, a virtual modeled scene, various temporarily-built scenes, and the like).
Fig. 1 is a schematic diagram of a video verification method according to at least one embodiment of the present disclosure.
At least one embodiment of the present disclosure provides a video verification method, which is applied to a client and can be executed by a processor or a computer. For example, the client may run on a terminal; the terminal may be any of various mobile terminals or a fixed terminal (for example, a fixed terminal connected with a camera so as to control the camera to capture video). The client may include an application (App), an applet inside various applications, and the like, which is not limited by the embodiments of the present disclosure. The application may be, for example, "live guest," "58 city," etc. The mobile terminal can be a mobile phone, a tablet computer, a portable computer, and the like, and the fixed terminal may be a desktop computer or the like. The operating system of the terminal can be an Android system, an iOS system, a HarmonyOS (Hongmeng) system, and the like.
For example, as shown in fig. 1, the video authentication method includes the following steps S10-S12.
As shown in fig. 1, in step S10: and acquiring the shot first video to obtain video data comprising the first video.
For example, the first video is a video taken by a terminal where the client is located. In one example, the terminal may include a video capture device (e.g., a camera, etc.), and the terminal may control the video capture device to capture the first video.
For example, the terminal may accept an instruction to capture a video and, based on the instruction, control the video capture device to begin capturing the first video. The instruction can be issued to the terminal by the user through touch, voice control, keyboard input, and the like.
For example, the first video may be a video in any of various formats, such as MPEG (Moving Picture Experts Group), AVI (Audio Video Interleave), MOV (QuickTime movie format), FLV (Flash Video), and the like.
As shown in fig. 1, in step S11: a plurality of verification information corresponding to the first video is collected.
For example, the plurality of pieces of authentication information include a plurality of pieces of static information including at least one piece of hardware static information corresponding to a terminal that captured the first video, and a plurality of pieces of dynamic information indicating information generated during capturing the first video and collected by the client. For example, the client includes a collection module, and during shooting of the first video, the collection module can collect a plurality of dynamic information; the acquisition module may also acquire a plurality of static information, i.e. the acquisition module is configured to implement step S11 described above.
As shown in fig. 1, in step S12: and sending the video data and the plurality of verification information to the server side so that the server side can verify the first video based on the plurality of verification information and the video data.
For example, the server side may be a local server side, a remote server side, a network server side (e.g., a cloud side), and the like.
For example, before the video data and the plurality of authentication information are transmitted to the server side, the video data and the plurality of authentication information may be encrypted, and then the encrypted video data and the plurality of authentication information may be transmitted to the server side, thereby ensuring security in the process of transmitting the video data and the plurality of authentication information.
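The patent does not specify the encryption scheme. As one illustrative sketch using only the standard library, the client can at least bind the video data and the verification information together with an integrity tag before upload; a real deployment would additionally encrypt the channel or payload (e.g. TLS or AES), and the shared key below is a placeholder, not a real provisioning scheme:

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # placeholder; real deployments provision keys securely

def package_upload(video_bytes: bytes, verification_info: dict) -> dict:
    # Bundle the video data and the verification information into one payload
    # and attach an HMAC tag so the server can detect tampering in transit.
    payload = (base64.b64encode(video_bytes).decode("ascii")
               + "|" + json.dumps(verification_info, sort_keys=True))
    tag = hmac.new(SHARED_KEY, payload.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_upload(bundle: dict) -> bool:
    # Server side: recompute the tag and compare in constant time.
    expected = hmac.new(SHARED_KEY, bundle["payload"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["tag"])
```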
For example, the video data further includes a video identifier corresponding to the first video, and the video identifier and the first video are packaged together to be a whole and transmitted to the server side.
For example, in one embodiment, the client is implemented as an application program, and the first video is obtained by shooting through the application program executed by the terminal, for example, the application program may control the video capture device to shoot, so as to obtain the first video.
For example, in some embodiments, step S11 may include: collecting a plurality of static information; in the process of shooting the first video after the application program is started, a plurality of dynamic information are collected.
For example, in some embodiments, information is collected from multiple dimensions throughout the whole process, from the user starting the application program and jumping to the shooting page for actual shooting, up to the final transmission of the data (i.e., the video data and the plurality of pieces of verification information) to the server, to obtain the plurality of pieces of verification information. In this case, the plurality of static information may be collected when the application program is started, and the plurality of dynamic information is collected over the whole process, for example, while the first video is being shot. For another example, in other embodiments, the plurality of static information is collected when the application is installed in the terminal. It should be noted that the embodiments of the present disclosure place no particular limitation on when the verification information is collected.
For example, from the perspective of an Android hardware device, hardware information related to the terminal that shoots the first video is collected to obtain the at least one piece of hardware static information. For example, the at least one piece of hardware static information includes one or more of the following pieces of information: the International Mobile Equipment Identity (IMEI) of the terminal, the Media Access Control (MAC) address of the terminal, the model of the central processing unit (CPU) of the terminal, the root (superuser authority) information of the terminal, the model of the main board of the terminal, the device identity number of the terminal (e.g., UDID, UUID, etc.), the vendor information of the read-only memory (ROM) of the terminal, the memory information of the terminal, whether the terminal is running in a virtual machine environment, and the like.
For example, the video verification method further includes: generating a hardware identifier based on the at least one hardware static information; and binding the hardware identification with the first video so as to take the hardware identification as the video identification. For example, the video identifier is a hardware identifier, so that whether the first video is a video really shot by the terminal can be judged based on at least one piece of hardware static information in the plurality of pieces of verification information and the video identifier.
For example, the at least one piece of hardware information corresponding to the terminal is combined to generate a hardware identifier. The hardware identifier is unique and serves as the unique mark of the user during the shooting of the first video and the transmission of the data (the video data and the plurality of pieces of verification information). In some embodiments, the hardware identifier may correspond to a terminal or to an application program, which is not limited in the present disclosure.
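The hardware-identifier generation described above can be sketched as follows. The patent does not specify the exact combination rule, so hashing the sorted key=value pairs of the collected fields is used here as one plausible choice; the field names are illustrative.

```python
import hashlib

def make_hardware_id(hw_info: dict) -> str:
    # Combine the hardware static fields into one stable identifier.
    # Hashing the sorted key=value pairs is an assumed combination rule,
    # not one specified by the source text.
    canonical = "|".join(f"{k}={hw_info[k]}" for k in sorted(hw_info))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the pairs are sorted before hashing, the same fields always yield the same identifier regardless of collection order, so the server can recompute it and compare it with the video identifier bound to the first video.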
For example, the plurality of pieces of static information further includes at least one piece of software static information corresponding to the application program. Dimensions such as the version number, package name, and debug state of the application program (i.e., the software) installed by the user are collected to obtain the at least one piece of software static information. For example, the at least one piece of software static information includes one or more of the following: the version number of the application program, the package name of the application program (for example, the package name is the unique identifier of the application program, designed according to the Android naming conventions), debug state information, and the like.
For example, in some embodiments, the version number of the application program may be collected, and the version number of the application program may be compared with the version number of the application program installed in a regular channel to determine whether the currently running application program is installed in the regular channel and is a regular application program, so that it may be verified whether the first video is obtained by shooting the application program installed in the regular channel by the terminal.
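The version-number comparison described above can be sketched as follows; the registry contents and package name are hypothetical, since the source only states that the reported version is compared with the versions published through the regular channel.

```python
# Hypothetical registry of versions published through the regular channel;
# in practice this list would be maintained on the server side.
OFFICIAL_VERSIONS = {"com.example.shoot": {"3.1.0", "3.2.0"}}

def is_official_build(package_name: str, version: str) -> bool:
    # The build is treated as regular only if its version number matches
    # one published through the regular distribution channel.
    return version in OFFICIAL_VERSIONS.get(package_name, set())
```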
For example, in some embodiments, the debugging mode and the debug state of the application program are obtained for judgment. If either the debugging mode or the debug state is on, it indicates that the first video may not be real, that is, a cheating behavior may exist in the process of shooting the first video, or the probability that a cheating behavior exists is high.
For example, in some embodiments, the plurality of static information further includes object information, and the object information is information representing an object in the first video, which is input by a user who captures the first video through a terminal where the client is located.
It should be noted that, in the embodiments of the present disclosure, "the information indicating the object in the first video, which is input through the terminal where the client is located" may mean that the user inputs the information through touch control, a keyboard (a virtual keyboard, a physical keyboard, or the like), voice control, or the like, and may also mean that the user selects a preset option on the display screen of the terminal to input the information indicating the object in the first video.
For example, the terminal photographs an object (e.g., a house) to obtain a first video, i.e., the content displayed in the first video is the object.
In the embodiments of the present disclosure, in step S12, verifying the first video means verifying its authenticity. "Authenticity of the first video" has two aspects: first, the first video is a real video of an object (e.g., a video of a real house), that is, the object in the first video actually exists; second, the first video is consistent with the object information input by the user who uploads it.
It should be noted that, in some embodiments, an acquisition rule may be set in advance for each piece of static information, so that in the information acquisition process, the static information may be acquired according to the acquisition rule. The collection rule may be set according to actual conditions, and may be various suitable rules, and the collection rule is not particularly limited in the embodiments of the present disclosure.
For example, in some embodiments, the plurality of dynamic information includes at least one hardware dynamic information and at least one software dynamic information. The at least one piece of hardware dynamic information is dynamic information generated by a terminal where a client is located and collected in the process of shooting the first video. The at least one piece of software dynamic information is dynamic information which is collected in the process of shooting the first video and is generated when a user shooting the first video uses an application program.
For example, the collection of hardware dynamic information is a key part of collecting the verification information, and mainly covers the various dynamic data generated by the terminal while the user is using it. For example, dynamic data of the terminal's sensors, changes in the terminal's memory, the dynamic load of CPU and memory usage (CPU load, i.e., the share of CPU resources occupied while the application program is in use), changes in the terminal's location information, changes in the information of the terminal's external devices, the changing state of the terminal's battery, the network connection state and traffic change state of the terminal, information of external terminals connected via Bluetooth, network base station information scanned by the terminal, wireless hotspot information connected to the terminal, and the like may be collected to obtain the at least one piece of hardware dynamic information. For example, the at least one piece of hardware dynamic information includes one or more of the following: sensor information, network information, device hardware information, external device information, and the like.
For example, the sensor information includes one or more of the following data: data of an acceleration sensor, data of a gyro sensor, data of a gravity sensor, data of a rotation vector sensor, data of a pedometer sensor, data of a direction sensor (for sensing the direction of a screen), data of a geomagnetic rotation vector sensor, data of an illumination sensor, data of an angular velocity sensor, data of a temperature and humidity sensor, data of a magnetic force sensor, data of a proximity sensor, data of a magnetic field sensor, data of a pressure sensor, and the like. The acceleration sensor, the gyroscope sensor, the gravity sensor, the angular velocity sensor, the rotation vector sensor and the pedometer sensor belong to dynamic sensors; the magnetic force sensor, the proximity sensor, the magnetic field sensor, the direction sensor and the geomagnetic rotation vector sensor belong to position sensors; illumination sensor, temperature and humidity sensor and pressure sensor belong to environmental sensor.
For example, the network information includes one or more of the following pieces of information: wireless fidelity (WIFI) hotspot information, mobile network base station information, network traffic usage information, and the like.
For example, the device hardware information includes one or more of the following pieces of information: GPS (Global Positioning System) information, battery change state information, CPU/memory usage information, and the like.
For example, the external device information may include one or more of the following pieces of information: bluetooth device information (i.e., information of an external device to which bluetooth is connected), external storage device information, other external hardware device information, and the like.
For example, the at least one hardware dynamic information is acquired based on a predetermined acquisition rule after a client (e.g., an application) is started. An acquisition rule can be preset for each piece of hardware dynamic information, so that the hardware dynamic information can be acquired according to the acquisition rule in the information acquisition process.
For example, in some embodiments, when acquiring data of sensors (e.g., the above dynamic sensor, position sensor, and environment sensor), the acquisition module of the client may perform an operation of acquiring sensor data output by each sensor (i.e., an acceleration sensor, a gyroscope sensor, a gravity sensor, an angular velocity sensor, a rotation vector sensor, a pedometer sensor, a magnetic sensor, a proximity sensor, a magnetic field sensor, a direction sensor, a geomagnetic rotation vector sensor, an illumination sensor, a temperature and humidity sensor, and a pressure sensor, etc.) of the terminal at regular time intervals. At this time, the acquisition rule corresponding to the sensor information may be: data is collected at certain time intervals. The time interval may be set according to practical circumstances, and the present disclosure does not specifically limit this.
For another example, in other embodiments, when collecting data of sensors (e.g., the dynamic sensor, the position sensor, and the environmental sensor), the collecting module of the client may only collect sensor data output by each sensor that is greater than a preset value. At this time, the acquisition rule corresponding to the sensor information may be: and collecting sensor data larger than a preset value. The preset value can be set according to practical situations, and the disclosure does not specifically limit this.
For another example, in other embodiments, when data of sensors (e.g., the dynamic sensor, the location sensor, and the environmental sensor) is collected, the collection module of the client may collect, at regular time intervals, sensor data output by each sensor that is greater than a preset value. At this time, the acquisition rule corresponding to the sensor information may be: and acquiring sensor data larger than a preset value according to a certain time interval.
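The combined acquisition rule just described (collect at fixed time intervals, keeping only readings above a preset value) can be sketched as follows; the pair layout of the readings and the parameter names are assumptions.

```python
def sample_sensor(readings, interval, threshold):
    # Keep readings whose magnitude exceeds `threshold` and that are
    # spaced at least `interval` seconds apart: one plausible reading of
    # the acquisition rule in the text. `readings` is an iterable of
    # (timestamp_seconds, value) pairs.
    kept, last_t = [], None
    for t, v in readings:
        if abs(v) <= threshold:
            continue  # below the preset value: discard
        if last_t is None or t - last_t >= interval:
            kept.append((t, v))
            last_t = t
    return kept
```

For instance, with `interval=1.0` and `threshold=1.0`, a reading of 0.1 is dropped for being too small, and readings closer together than one second are thinned out, which is what saves storage and collection frequency.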
Limiting collection with a preset acquisition rule avoids collecting all of the dynamic sensor data output by each sensor and saves the storage space used for the collected sensor data, thereby saving resources. Meanwhile, the frequency of collecting sensor data can be reduced, or customized according to the timing set by the terminal, which further saves resources and memory and does not affect the fluency of the user's main operation flow. In addition, because the amount of collected sensor data is smaller than the full sensor output, the analysis and processing of subsequent steps (such as the process of verifying the first video) are simplified, which improves execution efficiency and further saves resources.
For example, software dynamic information generated when a user uses an application may be collected. For example, information such as a method call stack corresponding to the application program, a network state of the application program, whether a risk plug-in or cheating auxiliary software is run in a background of the terminal, whether Hook risk exists in the application program, and the like may be collected to obtain at least one piece of software dynamic information. For example, the at least one software dynamic information includes one or more of the following: internet protocol address information, risk plug-in, memory information, terminal bandwidth load (network usage load, representing the change of network traffic in the application usage process), method stack, hook risk, behavior log, software error reporting information log, etc.
For example, the real-time method call stack generated when a user clicks a button during page operation is used to judge whether the user entered the shooting page or the video upload page through the normal operation flow, rather than by directly invoking a specific function of a page via a hook.
For example, when a user opens an application program on a terminal to shoot, whether a hook frame exists in the running environment of the application program, whether a dangerous Class name and a method name of the hook exist, whether a background simulation click tool is opened, whether a root tool is opened, and the like are detected, so that dynamic information of software is acquired.
For example, the at least one piece of software dynamic information is acquired based on a predetermined acquisition rule after the client (e.g., the application program) is started. An acquisition rule can be preset for each piece of software dynamic information, so that the software dynamic information is collected according to that rule during information collection. For example, the acquisition rule may be to collect the software dynamic information at certain time intervals; the acquisition rule may also be any other suitable rule, and the embodiments of the present disclosure do not specifically limit it.
For example, each item of the plurality of dynamic information includes a time when the dynamic information was collected. When analyzing the dynamic information, the server side may analyze whether the dynamic information is abnormal based on time in the dynamic information, so as to determine whether a cheating action exists in the process of shooting the first video.
For example, the sensor data output by a sensor are discrete, and based on the times included in the data, the change exhibited by the data over a period of time can be determined. For example, in the process of shooting the first video, because the user moves, the data sensed by the gyroscope sensor on the terminal should change constantly; if two pieces of gyroscope data output by the gyroscope sensor are identical but correspond to different times, it can be determined that a cheating behavior exists in the process of shooting the first video, or that the probability of a cheating behavior is high. For another example, if the position sensor outputs two position readings corresponding to different times, the moving speed of the user during shooting can be calculated from the distance between the two positions and the difference between the two times. If the calculated speed is close to a user's normal moving speed, it can be determined that no cheating behavior exists, or that the probability of a cheating behavior is low; if the speed is much greater than or much less than a user's normal moving speed, it can be determined that a cheating behavior exists, or that the probability of a cheating behavior is high. The analysis process for the remaining dynamic information may be set according to its specific type, and may be similar to or different from the method described above; this is not specifically limited in the embodiments of the present disclosure.
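The two timestamp-based checks above, identical gyroscope readings at different times and an implausible moving speed, can be sketched as follows. The speed bounds are illustrative, not values from the source.

```python
import math

def readings_suspicious(readings):
    # Flag a stream in which consecutive timestamped readings are
    # identical even though their timestamps differ (the gyroscope case).
    # `readings` is a list of (timestamp, value) pairs.
    return any(v1 == v2 and t1 != t2
               for (t1, v1), (t2, v2) in zip(readings, readings[1:]))

def speed_plausible(p1, t1, p2, t2, min_speed=0.1, max_speed=3.0):
    # Estimate walking speed (m/s) from two timestamped positions and
    # check it against an assumed plausible range. p1/p2 are (x, y)
    # positions in metres, t1/t2 timestamps in seconds.
    if t2 == t1:
        return False
    speed = math.dist(p1, p2) / abs(t2 - t1)
    return min_speed <= speed <= max_speed
```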
Fig. 2 is a schematic diagram of another video verification method according to at least one embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a video verification method, which is applied to a server side and can be executed by a processor or a computer. For example, the server side may be a local server side, a remote server side, a network server side, and the like.
For example, as shown in fig. 2, the video verification method includes the following steps S20 to S21.
For example, as shown in fig. 2, in step S20: receiving video data including a first video and a plurality of pieces of verification information corresponding to the first video transmitted from a client. For example, the client may include an application program or the like.
For example, the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information; the static information includes at least one piece of hardware static information corresponding to the video shooting terminal that shoots the first video, and the dynamic information represents information acquired in the process of shooting the first video. For example, the plurality of pieces of dynamic information may be collected by the client. For example, the verification information may be the information acquired in step S11 of the video verification method shown in fig. 1; for a description of the verification information, reference may be made to the embodiment of the video verification method shown in fig. 1, which is not repeated here.
For example, as shown in fig. 2, in step S21: verifying the first video based on the plurality of pieces of verification information and the video data.
For example, the server may include an information analysis and decision module and an information evaluation and scoring module. The information analysis and decision module is configured to process the collected verification information and video data, perform multi-dimensional cross identification and judgment on the first video based on the collected verification information, and calculate at least one verification result from the plurality of pieces of verification information. The information evaluation and scoring module is configured to calculate a scoring result based on the at least one verification result; based on the scoring result, the first video can be verified. For example, the two modules together implement step S21 above.
For example, the information analysis and decision module may identify the plurality of pieces of verification information to obtain an identification result. The identification result may include normally-operating data and abnormally-operating data: normally-operating data indicates that the corresponding verification information shows the terminal and the client have not been tampered with, and that no cheating behavior exists (or the probability of cheating is low) during video shooting; abnormally-operating data indicates that the corresponding verification information shows the terminal and the client may have been tampered with, and that a cheating behavior exists (or the probability of cheating is high) during video shooting.
For example, since verification information forged by fraudulent actors ("black industry") may be incomplete, the integrity of the plurality of pieces of verification information can be checked to determine whether a cheating behavior exists. For example, in some embodiments, step S21 may include: acquiring a target type and a target number, where the target type represents the preset information types of the verification information that should be collected during the shooting of the first video, and the target number represents the preset number of pieces of verification information that should be collected during the shooting of the first video; and in response to the types of the plurality of pieces of verification information being consistent with the target type and the number of the plurality of pieces of verification information being equal to the target number, determining that the first video is a real video captured by the video shooting terminal.
For example, in step S21, the integrity of the plurality of pieces of verification information is checked. In response to the verification information being incomplete, that is, the types of the verification information at least partially failing to match the target type and/or the number of pieces failing to match the target number, it is determined that a cheating behavior exists in the process of shooting the first video, or that the probability of a cheating behavior is high. In response to the verification information being complete, that is, the types matching the target type and the number matching the target number, it is determined that no cheating behavior exists in the process of shooting the first video, or that the probability of a cheating behavior is low.
For example, suppose the application program is preset to collect 20 pieces of verification information of specified types. If the server detects that the terminal has transmitted all 20 pieces and their types are consistent with the preset types, it may be determined that no cheating behavior exists in the process of shooting the first video, or that the probability of a cheating behavior is low. If the server detects that the terminal has transmitted only 10 pieces, only part of the verification information was collected, so it may be determined that a cheating behavior exists, or that the probability of a cheating behavior is high. If the server detects that the terminal has transmitted 20 pieces but their types are at least partially inconsistent with the preset types, it may likewise be determined that a cheating behavior exists, or that the probability of a cheating behavior is high.
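The integrity check described above (count and types against the preset targets) can be sketched as follows; representing each piece of verification information as a dict with a `"type"` key is an assumption.

```python
def integrity_ok(received, target_types, target_count):
    # The verification information is complete only when the number of
    # received pieces equals the preset target number and the set of
    # received types matches the preset target types.
    types = {item["type"] for item in received}
    return len(received) == target_count and types == target_types
```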
For example, in some embodiments, the client is located in the video shooting terminal or in a terminal different from the video shooting terminal. The video shooting terminal is the terminal that shoots the first video in the embodiment of the video verification method shown in fig. 1. The video data further includes a video identifier corresponding to the first video, and step S21 may include: determining a hardware identifier based on the at least one piece of hardware static information; acquiring the video identifier in the video data; and in response to the hardware identifier and the video identifier being consistent, determining that the first video is a real video shot by the video shooting terminal.
For example, the hardware identification may correspond to a video capture terminal.
For example, the plurality of static information further includes object information, where the object information is information indicating an object in the first video, which is input by a user who captures the first video through a terminal where the client is located, and step S21 may further include: determining a target object based on the object information; the consistency between the first video and the target object is verified based on the plurality of verification information and the video data.
For example, the object information may indicate address information of a target object, for example, an XX cell XX floor XX number XX (door), or the like.
For example, the target object is an internal space and/or an external space of a building (e.g., a house (a commercial room, a second-hand room, a mall, etc.)), or the like.
For example, in some embodiments, verifying the correspondence between the first video and the target object based on the plurality of verification information and the video data comprises: acquiring at least one piece of reference verification information; processing at least one piece of verification information in the plurality of pieces of verification information and at least one piece of reference verification information to obtain at least one verification result; based on the at least one verification result, a correspondence between the first video and the target object is verified.
For example, each verification result is a verification result corresponding to at least one piece of verification information.
For example, the plurality of pieces of verification information may be divided into a plurality of information groups, each containing several pieces of verification information and corresponding to one verification result. The information groups may include a group for verifying the terminal environment, a group for verifying system information, a group for verifying device fingerprint information, a group for verifying user behavior, and a group for verifying whether the device has been tampered with. In this case, the at least one verification result may include a terminal environment verification result (InstalledApp_info), a system information verification result (system_info), a device fingerprint verification result (devInfo), a user behavior verification result (devk_Dev), and a device tampering verification result (safe_Dev). For example, the terminal environment verification result may be calculated based on the verification information in the group for verifying the terminal environment: the verification information related to terminal environment security is compared with its corresponding reference verification information to obtain at least one intermediate verification result, and the terminal environment verification result is obtained from the at least one intermediate verification result; for example, each verification result may be a numerical value, and the terminal environment verification result may be obtained by a weighted sum of the intermediate verification results.
For example, each verification result may be a similarity value between the verification information and the corresponding reference verification information, e.g., a cosine similarity value. In this case, "processing at least one piece of verification information among the plurality of pieces of verification information and at least one piece of reference verification information" means computing a similarity value between the verification information and the reference verification information.
For example, in the process of comparing the verification information with the reference verification information to calculate the verification result, the verification information may be processed to extract data feature points in the verification information and compared and analyzed with the data feature points in the reference verification information to calculate the verification result.
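The comparison step above, extracting feature points and measuring their cosine similarity against the reference, can be sketched as follows; how the feature vectors are extracted is left open by the source, so the function only covers the similarity computation.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors: 1.0 means identical
    # direction, 0.0 means orthogonal (no similarity).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```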
For example, the at least one piece of reference verification information includes at least one of the following: information reflected by a terminal running a normal video shooting flow, video content information reflected by the first video, information corresponding to the target object, and the like.
For example, in some embodiments, each piece of verification information corresponds to one piece of reference verification information; in other embodiments, each piece of verification information may correspond to a plurality of pieces of reference verification information, which is not specifically limited by the present disclosure.
For example, in other embodiments, verifying the correspondence between the first video and the target object based on the plurality of verification information and the video data comprises: processing at least one piece of verification information in the plurality of pieces of verification information through a machine learning model to obtain at least one verification result; based on the at least one verification result, a correspondence between the first video and the target object is verified.
For example, the machine learning model may be a neural network model. The machine learning model may also analyze and process the verification information using a decision tree algorithm from the sklearn library (scikit-learn, a machine learning library for Python) to obtain a verification result.
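A minimal sketch of the decision-tree approach is shown below. The source names only the algorithm, so the feature layout (`[sensor_variance, cpu_load, hook_detected]`), the toy training data, and the labels (1 = cheating suspected, 0 = normal) are all illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative training samples: [sensor_variance, cpu_load, hook_detected]
# with label 1 = cheating suspected, 0 = normal.
X = [[0.0, 0.1, 1], [0.0, 0.2, 1], [0.8, 0.5, 0], [0.9, 0.6, 0]]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def verify(sample):
    # Returns 1 when the model judges the shooting process as suspicious.
    return int(clf.predict([sample])[0])
```

In practice the verification information collected in step S11 would be converted to such feature vectors before being fed to the model.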
Before the machine learning model is used to analyze the verification information, it may be trained: first, a large amount of training data (including static information and dynamic information) is collected; the data is then analyzed to build user profiles and labels; and the model is trained with supervised learning. Training the model on a large amount of data yields an offline model, which constitutes the user profile.
The machine learning model aggregates multi-dimensional data to make a comprehensive judgment: profiling the user, analyzing behavior, and finally forming the features used to decide whether the video meets the standard.
For example, in an embodiment of the present disclosure, the video verification method may further include: storing the plurality of pieces of verification information and the video data, and adding them to the training data set used to train the machine learning model. For example, the machine learning model may be retrained at intervals (e.g., every 3 days, 10 days, etc.) based on the training data in the training data set to update its parameters and further optimize the model.
It should be noted that each verification result may indicate whether a cheating behavior exists in the process of shooting the first video and/or the probability that one exists. For example, when the verification result is a numerical value, it may range from 0 to 1: the closer the verification result is to 1, the more likely a cheating behavior exists (or the higher its probability); the closer it is to 0, the less likely a cheating behavior exists (or the lower its probability). If the at least one verification result contains only one verification result, that result is the scoring result, and whether a cheating behavior exists and its probability can be determined from it directly. If the at least one verification result contains a plurality of verification results, they need to be considered together, for example via weighting, to determine a final scoring result, from which whether a cheating behavior exists and its probability are then determined.
For example, verifying the consistency between the first video and the target object based on the at least one verification result includes: determining at least one weight corresponding one-to-one to the at least one verification result; processing the at least one verification result and the at least one weight to obtain a score result; and verifying the consistency between the first video and the target object based on the score result.
For example, each verification result and the score result may each be a score. The at least one verification result may be weighted and summed: each verification result is multiplied by its corresponding weight to obtain an intermediate verification result, and the at least one intermediate verification result corresponding one-to-one to the at least one verification result is summed. In one example, the sum serves as an intermediate score result, which is then processed (for example, normalized) to obtain the score result; in another example, the sum is used directly as the score result.
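The weighted combination described above can be sketched as follows (the function name and the normalization step are illustrative assumptions, not part of the disclosure):

```python
def weighted_score(results, weights):
    """Combine per-check verification results into one score.

    Each verification result is multiplied by its weight (yielding an
    intermediate verification result), the products are summed, and the
    sum is normalized by the total weight so the final score stays in
    [0, 1] when every result is in [0, 1].
    """
    if not results or len(results) != len(weights):
        raise ValueError("need exactly one weight per verification result")
    total_weight = sum(weights)
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(r * w for r, w in zip(results, weights)) / total_weight
```

For example, `weighted_score([0.8, 0.6, 1.0], [2, 1, 1])` gives more influence to the first check than to the other two.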
For example, in some embodiments, verifying the consistency between the first video and the target object based on the score result may include: determining that the first video and the target object are consistent when the score result is greater than or equal to a first predetermined threshold; and determining that the first video and the target object are not consistent when the score result is less than a second predetermined threshold.
For example, the first predetermined threshold and the second predetermined threshold may be set according to actual situations, which is not limited by the embodiment of the present disclosure. In some examples, if the scoring result is a normalized result, the first predetermined threshold may be 0.8, 0.85, 0.9, etc., and the second predetermined threshold may be 0.4, 0.45, 0.5, etc.
For example, the first predetermined threshold and the second predetermined threshold may or may not be equal. When they are not equal, verifying the consistency between the first video and the target object based on the score result may further include: when the score result is smaller than the first predetermined threshold and larger than the second predetermined threshold, acquiring the history information of the user who shot the first video; determining that the first video and the target object are consistent in response to the history information indicating that the user has no cheating behavior; and determining that the first video and the target object are not consistent in response to the history information indicating that the user has cheating behavior. For example, "the user has cheating behavior" may indicate that a video previously uploaded by the user was not a real video, and/or that a previously uploaded video was not consistent with the object information entered by the user for that video; it may also indicate other cheating behaviors of the user (determined according to the actual situation).
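A minimal sketch of this two-threshold decision with the history fallback for scores in the gray zone (the function name, the default threshold values, and the boolean `history_clean` flag are illustrative assumptions; a higher score is taken to mean greater consistency, as in the passage above):

```python
def verify_consistency(score, history_clean, first_threshold=0.85,
                       second_threshold=0.45):
    """Decide whether the first video and the target object are consistent.

    Scores at or above the first threshold pass outright; scores below
    the second threshold fail outright; scores between the two fall back
    to the user's history (history_clean=True means no past cheating).
    """
    if score >= first_threshold:
        return True
    if score < second_threshold:
        return False
    return history_clean
```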
The process of verifying the first video in step S21 is described below with reference to a specific example.
For example, in some embodiments, the first video may be validated based on three aspects, namely, environmental risk identification, behavioral risk identification, and risk identification of tampering with the terminal.
For example, environmental risk identification analyzes and compares the at least one piece of hardware dynamic information generated during user operation. Cheating identification can be performed on the whole process of shooting the video by the current user, and on the environment of the application program used, based on the obtained dynamic data of the various sensors together with two kinds of predefined rules: the dynamic data rules of the sensors as reflected by a terminal running a normally operated video shooting flow (hereinafter referred to as normal dynamic data rules), and the dynamic data rules of the sensors as reflected by a terminal running an abnormally operated video shooting flow (hereinafter referred to as abnormal dynamic data rules). The normal dynamic data rules and the abnormal dynamic data rules may serve as the above-mentioned reference verification information.
For example, in some examples, the obtained sensor dynamic data may be angular velocity data of the current terminal in the x, y, and z directions (for example, the three coordinate axis directions of a virtual spatial coordinate system). When a video is shot normally, the page of the terminal's application program may be touched, clicked, and moved around by the user, so the terminal shooting the video moves with a noticeable amplitude, and the angular velocity sensor frequently outputs angular velocity data, especially when a transition such as switching rooms is shot. It may therefore be determined, based on the timestamps in the angular velocity data, whether angular velocity data output by the angular velocity sensor was acquired within a preset time interval, thereby performing cheating recognition on the application program of the terminal. For example, if angular velocity data of the terminal in the x, y, and z directions is obtained while a room-switching transition is shot, and the change pattern of that data conforms to the corresponding action, the angular velocity data can be considered to indicate that the process of shooting the first video is normal; in this case, the verification result corresponding to the angular velocity data indicates that no cheating behavior exists in the process of shooting the first video, or that the probability of cheating behavior is low. If no angular velocity data is acquired from the terminal's angular velocity sensor, or abnormal angular velocity data is acquired, the angular velocity data can be considered to indicate that the process of shooting the first video is abnormal; in this case, the verification result corresponding to the angular velocity data indicates that cheating behavior exists in the process of shooting the first video, or that the probability of cheating behavior is high.
For example, in some examples, if during the process of shooting the first video the acceleration data output by the acceleration sensor of the terminal does not change, or changes only slightly, throughout a preset time interval, the acceleration data can be considered to indicate that cheating behavior exists in the process of shooting the first video, or that the probability of cheating behavior is high; the corresponding verification result indicates the same. If, on the other hand, the acceleration data changes regularly within the preset time interval and is consistent with the actions reflected by the video content of the first video, the acceleration data can be considered to indicate that no cheating behavior exists, or that the probability of cheating behavior is low; the corresponding verification result indicates the same. For example, whether the acceleration data changes within the preset time interval, and by what amplitude, may be determined based on the timestamps in the acceleration data.
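The "no change or small change" test on a sensor stream can be sketched as a variance check over the samples collected within the preset time interval (the threshold value and function name are illustrative assumptions):

```python
from statistics import pstdev

def looks_static(samples, min_std=0.05):
    """Flag a sensor stream as suspicious when its readings barely change.

    A terminal genuinely carried around while shooting should produce
    accelerometer (or position) readings with visible variance; a
    near-constant stream suggests the video is not really being shot.
    """
    if len(samples) < 2:
        return True  # too few samples to demonstrate real motion
    return pstdev(samples) < min_std
```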
For example, in some examples, if during the process of shooting the first video the position data output by the position sensor of the terminal does not change, or changes only slightly, throughout a preset time interval, the position data can be considered to indicate that cheating behavior exists in the process of shooting the first video, or that the probability of cheating behavior is high; the corresponding verification result indicates the same. If the position data changes regularly within the preset time interval, for example if the position information indicated by the position data changes along with the position changes presented in the video content shot by the user (i.e., the content of the first video), the position data can be considered to indicate that no cheating behavior exists, or that the probability of cheating behavior is low; the corresponding verification result indicates the same. It should be noted that, in this example, the position changes presented in the video content shot by the user may represent the video content information reflected by the first video in the reference verification information.
For example, whether the position data changes within the preset time interval, and by what amplitude, may be determined based on the timestamps in the position data.
For example, in some examples, an indoor movement track of the user within the house (including moving distance, moving angle, and the like) while shooting the house to obtain the first video can be constructed from the sensor data output by the pedometer sensor and the gyroscope sensor, and then compared with the floor plan of the house (for example, the floor plan represents the information corresponding to the target object in the reference verification information), so as to determine whether the first video shot by the user is consistent with the layout of the real house, and thus whether cheating behavior exists in the process of shooting the first video and with what probability. In other examples, at least one piece of GPS data collected periodically by a sensor along the indoor movement track can be compared with the GPS data recorded when the user uploads the first video (the latter may be entered by the user or obtained by automatic positioning in the application program): the real position of the house being shot is determined from the periodically collected GPS data, the position claimed by the user is determined from the GPS data recorded at upload, and the two are compared for consistency. For example, the GPS data may represent longitude and latitude information, and may further include altitude information.
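The comparison between the periodically collected GPS fixes and the location recorded at upload reduces to a distance check; a sketch using the haversine formula (the 200 m tolerance and the function names are illustrative assumptions):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))  # mean Earth radius in meters

def location_matches(track, declared, max_m=200.0):
    """Compare GPS fixes sampled along the shooting track against the
    location declared at upload time; any fix farther than max_m meters
    away marks the declared location as inconsistent."""
    return all(haversine_m(lat, lon, *declared) <= max_m
               for lat, lon in track)
```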
For example, in some examples, during the process of shooting the first video, the collected light data output by the light sensor may be compared and analyzed against the light in the content of the first video, for example whether the curve of the light change indicated by the light data conforms to the expected light change when the user shoots indoors (for example, whether the data output by the light sensor increases as the user enters a bright room from a dimly lit building; the expected indoor light change characteristics may serve as reference verification information), so as to determine whether the user is actually shooting the first video or merely re-recording an existing video, and thus whether cheating behavior exists in the process of shooting the first video and with what probability.
For example, in some examples, the network environment information generated when the user shoots the first video may be analyzed, for example the WIFI information connected to the terminal (including signal strength, MAC address, and the like) and the base station information scanned by the terminal. A WIFI positioning result and a base station positioning result are obtained by passing this information (i.e., the WIFI information, base station information, and the like) to a remote positioning interface; these two results and the GPS data of the terminal are then analyzed to determine the error ranges among the three pieces of positioning information, so as to estimate the possibility that the position information corresponding to the first video has been tampered with.
For example, in some examples, cheating recognition may be performed on the application program of the terminal using the collected temperature data output by a temperature sensor. When a user records a video with the terminal, the temperature of the terminal is usually significantly higher than when no video is being shot, that is, the temperature rises while the first video is shot; if the terminal is not actually shooting a video, its temperature remains lower and almost constant. Accordingly, whether cheating behavior exists in the process of shooting the first video, and with what probability, may be determined based on whether the temperature data output by the temperature sensor is higher than the terminal's temperature when no video is being shot.
For example, for risk identification of terminal tampering, information from multiple dimensions may be cross-compared to determine the probability that the terminal has been tampered with. The collected hardware static information is compared with the hardware static information of terminals available on the market: for example, the Android version number, CPU model, memory information (64 GB, 128 GB, 256 GB, or the like), and so on, in the at least one piece of hardware static information are each compared with the corresponding parameters of commercially available terminals, so as to determine whether the terminal shooting the first video is a normal mobile terminal or a special mobile terminal customized for fraudulent use. For example, if the collected hardware static information is inconsistent with the hardware static information of commercially available terminals, the hardware identifier determined based on the at least one piece of hardware static information will be inconsistent with the video identifier, and it can be determined that the first video is not a real video shot by the video shooting terminal; if the collected hardware static information is consistent with that of commercially available terminals, the hardware identifier will be consistent with the video identifier, and it can be determined that the first video is a real video shot by the video shooting terminal.
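The cross-check of collected hardware static information against the specifications of commercially available terminals can be sketched as a catalog lookup (the catalog contents, device name, and field names below are entirely hypothetical):

```python
# Hypothetical catalog: model -> specs of the commercially available device.
KNOWN_DEVICES = {
    "PhoneX-2021": {
        "android": {"11", "12"},
        "cpu": {"SD888"},
        "mem_gb": {64, 128, 256},
    },
}

def is_known_hardware(info):
    """Compare reported static hardware info against the catalog.

    A mismatch on any field (Android version, CPU model, memory size)
    suggests a tampered or custom-built terminal rather than a normal
    commercially available one.
    """
    spec = KNOWN_DEVICES.get(info.get("model"))
    if spec is None:
        return False
    return (info.get("android") in spec["android"]
            and info.get("cpu") in spec["cpu"]
            and info.get("mem_gb") in spec["mem_gb"])
```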
It should be noted that the above specific examples are only illustrative. When other hardware dynamic information is used to perform cheating recognition on the process of shooting the first video, the recognition principle is similar to the analysis in the above examples and can be configured according to the specific information type; details are not repeated in this disclosure.
For example, behavioral risk identification analyzes the user's usage behavior from the user's perspective, using the collected verification information to characterize that behavior.
For example, by analyzing information such as the method call stack, page jump records, and behavior logs of user operations while the user uses the App, it can be judged whether the user's operation flow during the shooting of the first video conforms to the normal use flow of the App, and whether there are multiple abnormal jumps, repeated unreasonable button clicks, or multiple abnormal interface requests. For example, in one example, the normal use flow of the App jumps from page A to page B and then to page C; if the verification information shows that the user jumped directly from page A to page C while operating the application program, an abnormal jump has occurred, so it may be determined that cheating behavior may exist, or that the probability of cheating behavior in the process of shooting the first video is high.
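The A→B→C flow check can be sketched as validating each recorded page transition against a whitelist (the page names and allowed transitions are the hypothetical example from the text):

```python
# Allowed transitions for the hypothetical normal flow: A -> B -> C.
ALLOWED_TRANSITIONS = {("A", "B"), ("B", "C")}

def flow_is_normal(jump_log):
    """Return True only if every consecutive page jump in the log is an
    allowed transition; a direct A -> C jump, for example, is flagged."""
    return all(step in ALLOWED_TRANSITIONS
               for step in zip(jump_log, jump_log[1:]))
```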
For example, the moving speed of the user and the angle at which the terminal is held while shooting the first video may be calculated from the sensor data output by the terminal's sensors, and the authenticity and reasonableness of the shooting behavior may be analyzed by determining whether the user exhibits an unreasonable shooting angle or habits such as an abnormal moving speed or an abnormal body inclination during the shooting of the first video.
For example, the information evaluation and scoring module may assign weights to the collected verification information, calculate the score result, and rank the risk. For example, one range of the score result corresponds to a red warning (e.g., a cheating possibility of 95%), another range corresponds to a yellow warning (e.g., a cheating possibility of 50%), and so on.
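The score-to-warning mapping might look like the following sketch (the tier names follow the text; the exact cut-offs are the example values given above and are otherwise assumptions):

```python
def risk_level(cheat_probability):
    """Map an estimated cheating probability to a warning tier."""
    if cheat_probability >= 0.95:
        return "red"
    if cheat_probability >= 0.50:
        return "yellow"
    return "green"
```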
Based on the process of risk identification of terminal tampering, it can be determined whether the first video is a real video shot by the terminal; based on the processes of environmental risk identification and behavioral risk identification, at least one verification result can be obtained. The information evaluation and scoring module can then set a weight for each verification result, perform a weighted calculation based on the at least one verification result and the at least one weight to obtain a score result, and finally verify the consistency between the first video and the target object based on the score result.
In the video verification method provided by the embodiments of the present disclosure, video verification is implemented based on a score result: a plurality of pieces of verification information (e.g., a plurality of pieces of static information and a plurality of pieces of dynamic information) are acquired; the reference verification information (e.g., information corresponding to a normal operation process, the content information of the shot video itself, etc.) is compared and analyzed against the acquired verification information to obtain at least one verification result; and the score result is determined based on the at least one verification result. For example, a machine learning model using a Decision Trees (DTs) algorithm may be used to derive the score result, thereby enabling video verification based on the score result.
It should be noted that in various embodiments of the present disclosure, the flow of the video verification method may include more or fewer operations, and these operations may be performed sequentially or in parallel. Although the flow of the video verification method described above includes a plurality of operations occurring in a particular order, it should be clearly understood that the order of these operations is not limited. The above-described video verification method may be performed once, or may be performed multiple times according to predetermined conditions.
Fig. 3 is a schematic block diagram of a video verification apparatus according to at least one embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a video verification apparatus, which may be applied to a client, for example, the video verification apparatus may be disposed on a terminal where the client is located.
For example, as shown in fig. 3, the video verification apparatus 300 may include: an obtaining module 301, an acquisition module 302, and a transmission module 303.
The obtaining module 301 is configured to obtain the shot first video and obtain video data including the first video, that is, the obtaining module 301 is configured to implement step S10 of the video verification method shown in fig. 1, and for the operation performed by the obtaining module 301, reference may be made to the related description of step S10 above, which is not described herein again.
For example, the terminal may include a camera for capturing the first video. The obtaining module 301 may be connected to a camera to obtain the first video.
The acquisition module 302 is configured to acquire a plurality of pieces of verification information corresponding to the first video. For example, the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information; the plurality of pieces of static information include at least one piece of hardware static information corresponding to the terminal that shot the first video, and the plurality of pieces of dynamic information represent information generated in the process of shooting the first video and collected by the client. The acquisition module 302 is configured to implement step S11 of the video verification method shown in fig. 1; for the operations performed by the acquisition module 302, reference may be made to the related description of step S11, which is not repeated here.
For example, the client includes an application program, and the first video is obtained through shooting by the terminal running the application program. In performing the step of acquiring the plurality of pieces of verification information corresponding to the first video, the acquisition module 302 is configured to: collect the plurality of pieces of static information; and collect the plurality of pieces of dynamic information in the process of shooting the first video after the application program is started. For example, the at least one piece of hardware dynamic information is collected based on a predetermined collection rule after the client (e.g., the application program) is started.
For example, the plurality of pieces of static information further include object information that is information indicating an object in the first video input by a user who captures the first video through a terminal where the client is located.
It should be noted that, for a plurality of static information and a plurality of dynamic information, please refer to the related description in the embodiment of the video verification method shown in fig. 1, and details are not repeated here.
The transmission module 303 is configured to transmit the video data and the plurality of verification information to the server side for the server side to verify the first video based on the plurality of verification information and the video data. The transmission module 303 is configured to implement step S12 of the video verification method shown in fig. 1, and for the operation performed by the transmission module 303, reference may be made to the relevant description of step S12 above, which is not described herein again.
For example, the transmission module 303 may implement data transmission through wireless transmission (e.g., wireless transmission manner such as 3G/4G/5G mobile communication network, bluetooth, zigbee, or WiFi) and/or wired transmission (e.g., wired transmission manner such as twisted pair, coaxial cable, or optical fiber transmission).
For example, in some embodiments, the video data further includes a video identifier corresponding to the first video, and the video verification apparatus 300 may further include an identifier generation module configured to generate the hardware identifier based on the at least one piece of hardware static information; and binding the hardware identification with the first video to take the hardware identification as the video identification.
For example, the obtaining module 301, the acquisition module 302, the transmission module 303, and the identification generation module may be implemented by hardware, software, firmware, or any feasible combination thereof.
In some embodiments of the present disclosure, the obtaining module 301, the acquisition module 302, the transmission module 303, and/or the identification generation module comprise code and programs stored in a memory; a processor may execute the code and programs to implement some or all of the functions of the obtaining module 301, the acquisition module 302, the transmission module 303, and/or the identification generation module described above.
In some embodiments of the present disclosure, the obtaining module 301, the acquisition module 302, the transmission module 303, and/or the identification generation module may be dedicated hardware devices for implementing some or all of their functions described above. For example, the obtaining module 301, the acquisition module 302, the transmission module 303, and/or the identification generation module may be one circuit board or a combination of multiple circuit boards for implementing the functions described above. In an embodiment of the present disclosure, the circuit board or combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories connected to the processors; and (3) firmware stored in the memories and executable by the processors.
For example, the detailed description of the process of the video verification method performed by the video verification apparatus 300 can refer to the related description in the embodiment of the video verification method applied to the terminal, and the repeated parts are not repeated.
Fig. 4 is a schematic block diagram of another video verification apparatus provided in at least one embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a video verification apparatus, which may be applied to a server side; for example, the video verification apparatus may be disposed on the server side.
For example, as shown in fig. 4, the video verification apparatus 400 may include a receiving module 401 and a verification module 402.
For example, the receiving module 401 is configured to receive the video data including the first video and the plurality of pieces of verification information corresponding to the first video transmitted from the client. For example, the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information; the plurality of pieces of static information include at least one piece of hardware static information corresponding to the video shooting terminal that shot the first video, and the plurality of pieces of dynamic information represent information collected in the process of shooting the first video. The receiving module 401 is configured to implement step S20 of the video verification method shown in fig. 2; for the operations performed by the receiving module 401, reference may be made to the related description of step S20 above, which is not repeated here.
For example, the receiving module 401 may implement data receiving in a wireless and/or wired manner.
For example, the verification module 402 is configured to verify the first video based on the plurality of verification information and the video data. The verification module 402 is configured to implement step S21 of the video verification method shown in fig. 2, and for the operation performed by the verification module 402, reference may be made to the related description of step S21, which is not described herein again.
For example, the client is located in the video shooting terminal or in a terminal different from the video shooting terminal, and the video data further includes a video identifier corresponding to the first video. In some embodiments, in performing the step of verifying the first video based on the plurality of pieces of verification information and the video data, the verification module 402 is configured to: determine a hardware identifier based on the at least one piece of hardware static information; acquire the video identifier in the video data; and in response to the hardware identifier being consistent with the video identifier, determine that the first video is a real video shot by the video shooting terminal.
For example, in some embodiments, in performing the step of verifying the first video based on the plurality of pieces of verification information and the video data, the verification module 402 is configured to: acquire a target type and a target number, wherein the target type represents a preset information type of the verification information required to be collected in the process of shooting the first video, and the target number represents a preset number of pieces of verification information required to be collected in the process of shooting the first video; and in response to the types of the plurality of pieces of verification information being consistent with the target type and the number of the plurality of pieces of verification information being consistent with the target number, determine that the first video is a real video shot by the video shooting terminal.
For example, the plurality of pieces of static information further include object information, which is information indicating an object in the first video, input by the user who shoots the first video through the terminal where the client is located. In some embodiments, in performing the step of verifying the first video based on the plurality of pieces of verification information and the video data, the verification module 402 is further configured to: determine a target object based on the object information; and verify the consistency between the first video and the target object based on the plurality of pieces of verification information and the video data.
For example, the target object is an internal space and/or an external space of a building, or the like.
For example, in some embodiments, in performing the step of verifying the consistency between the first video and the target object based on the plurality of pieces of verification information and the video data, the verification module 402 is configured to: obtain at least one piece of reference verification information, wherein the at least one piece of reference verification information includes at least one of the following: information reflected by a terminal on which a normal video shooting process runs, video content information reflected by the first video, and information corresponding to the target object; process at least one piece of verification information among the plurality of pieces of verification information together with the at least one piece of reference verification information to obtain at least one verification result, wherein each verification result corresponds to at least one piece of verification information; and verify the consistency between the first video and the target object based on the at least one verification result.
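One way to sketch the comparison step — assuming, which the disclosure does not mandate, that each piece of verification information and its reference counterpart are keyed by name:

```python
def compare_with_reference(verification_info: dict, reference_info: dict) -> dict:
    # Produce one boolean verification result per piece of verification
    # information that has a reference counterpart; keys present on only
    # one side are skipped rather than counted as failures.
    shared_keys = verification_info.keys() & reference_info.keys()
    return {key: verification_info[key] == reference_info[key] for key in shared_keys}
```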
For example, in other embodiments, in performing the step of verifying the consistency between the first video and the target object based on the plurality of pieces of verification information and the video data, the verification module 402 is configured to: process at least one piece of verification information among the plurality of pieces of verification information through a machine learning model to obtain at least one verification result; and verify the consistency between the first video and the target object based on the at least one verification result.
For example, in performing the step of verifying the consistency between the first video and the target object based on the at least one verification result, the verification module 402 is configured to: determine at least one weight in one-to-one correspondence with the at least one verification result; process the at least one verification result and the at least one weight to obtain a scoring result; and verify the consistency between the first video and the target object based on the scoring result.
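The weighted scoring step can be sketched as a normalized weighted sum; normalization into [0, 1] is an assumption here, as the disclosure does not fix a scoring formula:

```python
def score(results: list, weights: list) -> float:
    # results: per-item verification outcomes (1 for pass, 0 for fail);
    # weights: one weight per result, in one-to-one correspondence.
    assert len(results) == len(weights)
    weighted = sum(r * w for r, w in zip(results, weights))
    return weighted / sum(weights)  # normalize so the score lies in [0, 1]
```

For example, `score([1, 0, 1], [2.0, 1.0, 1.0])` yields 0.75: the failed item carries a quarter of the total weight.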
For example, in performing the step of verifying the consistency between the first video and the target object based on the scoring result, the verification module 402 is configured to: determine that the first video and the target object are consistent when the scoring result is greater than or equal to a first predetermined threshold; determine that the first video and the target object are not consistent when the scoring result is less than a second predetermined threshold; and when the scoring result is less than the first predetermined threshold and greater than the second predetermined threshold, acquire history information of the user who shot the first video, determine that the first video and the target object are consistent in response to the history information indicating that the user has no cheating behavior, and determine that the first video and the target object are not consistent in response to the history information indicating that the user has cheating behavior.
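The two-threshold decision, with the fall-back to user history in the grey zone, might look like the sketch below; treating a score exactly equal to the second threshold as part of the grey zone is an assumption, since the text leaves that boundary unspecified:

```python
def decide_consistency(score_result: float, first_threshold: float,
                       second_threshold: float, user_has_cheated: bool = None) -> bool:
    # first_threshold >= second_threshold is assumed.
    if score_result >= first_threshold:
        return True   # confidently consistent
    if score_result < second_threshold:
        return False  # confidently inconsistent
    # Grey zone: decide from the history information of the shooting user.
    if user_has_cheated is None:
        raise ValueError("history information is required in the grey zone")
    return not user_has_cheated
```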
For example, the receiving module 401 and the verification module 402 may be implemented by hardware, software, firmware, or any feasible combination thereof.
In some embodiments of the present disclosure, the receiving module 401 and/or the verification module 402 include code and programs stored in a memory; a processor may execute the code and programs to implement some or all of the functions of the receiving module 401 and/or the verification module 402 described above.
In some embodiments of the present disclosure, the receiving module 401 and/or the verification module 402 may be dedicated hardware devices implementing some or all of the functions of the receiving module 401 and/or the verification module 402 described above. For example, the receiving module 401 and/or the verification module 402 may be one circuit board or a combination of circuit boards implementing the functions described above. In the embodiments of the present disclosure, the circuit board or the combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory computer-readable memories connected to the processors; and (3) firmware stored in the memories and executable by the processors.
For example, for a detailed description of the process in which the video verification apparatus 400 performs the video verification method, reference may be made to the related description in the embodiments of the video verification method applied to the server side, and repeated descriptions are omitted here.
Fig. 5 is a schematic block diagram of a video verification system according to at least one embodiment of the present disclosure.
At least one embodiment of the present disclosure further provides a video verification system, as shown in fig. 5, the video verification system 500 includes: client 501 and server 502. The video verification system provided by the embodiment of the disclosure can be applied to an internet system, and the client 501 and the server 502 are connected through a network.
For example, the client 501 is configured to: acquire a shot first video to obtain video data comprising the first video; acquire a plurality of pieces of verification information corresponding to the first video, wherein the plurality of pieces of verification information comprise a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information comprise at least one piece of hardware static information corresponding to the terminal that shoots the first video, and the plurality of pieces of dynamic information represent information generated in the process of shooting the first video and acquired by the client; and send the video data and the plurality of pieces of verification information to the server side 502.
For example, the server side 502 is configured to: receive the video data and the plurality of pieces of verification information; and verify the first video based on the plurality of pieces of verification information and the video data.
For example, the client 501 may be the client in the embodiment of the video verification method shown in fig. 1 or the client in the embodiment of the video verification method shown in fig. 2, and the server side 502 may be the server side described in any of the above embodiments. For example, for a detailed description of the operations performed by the client 501 and the operations performed by the server side 502, reference may be made to the related description in the above embodiments of the video verification method, and repeated descriptions are omitted.
Fig. 6 is a flowchart of a video verification method according to an embodiment of the present disclosure.
A specific process of performing video verification by the video verification system provided by the embodiment of the present disclosure is described below with reference to fig. 6.
As shown in fig. 6, first, the user starts an application program in the terminal. For example, static information may be collected when the application program is started, where collecting the static information includes collecting hardware static information and collecting software static information. The hardware static information may include one or more of the following pieces of information: the IMEI of the terminal, the MAC address of the terminal, root information of the terminal, the ID of the terminal, vendor information of the ROM of the terminal, and the like; the software static information may include one or more of the following pieces of information: the version number of the application program, the debugging state, the package name, and the like.
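The startup collection described above might be sketched as follows; the dictionary arguments and field names are illustrative stand-ins for the platform APIs (e.g. telephony and package managers) a real client would query:

```python
def collect_static_info(terminal: dict, app: dict) -> dict:
    # Gather static information once at application startup, split into the
    # hardware and software groups described in the text.
    return {
        "hardware": {
            "imei": terminal.get("imei"),
            "mac_address": terminal.get("mac"),
            "root_status": terminal.get("rooted"),
            "device_id": terminal.get("device_id"),
            "rom_vendor": terminal.get("rom_vendor"),
        },
        "software": {
            "app_version": app.get("version"),
            "debug_state": app.get("debug"),
            "package_name": app.get("package"),
        },
    }
```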
Then, the user shoots the first video through the terminal. Dynamic information can be collected in the process of shooting the first video, and collecting the dynamic information includes collecting hardware dynamic information and collecting software dynamic information. The hardware dynamic information includes one or more of the following: sensor information, network information, device hardware information, and external device information. The sensor information may include data output by a motion sensor, data output by a position sensor, data output by an environment sensor, and the like. For example, the motion sensor may include an acceleration sensor, a gyroscope sensor, a gravity sensor, an angular velocity sensor, a rotation vector sensor, a pedometer sensor, and the like; the position sensor may include a magnetic force sensor, a proximity sensor, a magnetic field sensor, a direction sensor, a geomagnetic rotation vector sensor, and the like; the environment sensor may include an illumination sensor, a temperature and humidity sensor, a pressure sensor, and the like. The network information may include one or more of the following: Wi-Fi hotspot information, mobile network base station information, network traffic usage information, and the like. The device hardware information may include one or more of the following: GPS information, battery change status information, CPU/memory usage information, and the like. The external device information may include one or more of the following pieces of information: Bluetooth device information, external storage device information, other external hardware device information, and the like. The software dynamic information may include one or more of the following: a method stack, whether a program runs in the background, hook risks, software resource usage, and the like.
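During shooting, the dynamic samples could be accumulated in a simple timestamped container — an illustrative stand-in for the collection container mentioned in the text, with invented method and field names:

```python
import time

class CollectionContainer:
    """In-memory container for dynamic information sampled while the
    first video is being shot (illustrative, not the disclosed design)."""

    def __init__(self):
        self.samples = []

    def record(self, category: str, name: str, value) -> None:
        # Each sample keeps its capture time, so the server can later check
        # that the data really spans the shooting period.
        self.samples.append({"t": time.time(), "category": category,
                             "name": name, "value": value})

container = CollectionContainer()
container.record("hardware", "accelerometer", (0.0, 0.1, 9.8))
container.record("hardware", "gps", (31.23, 121.47))
container.record("software", "hook_risk", False)
```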
For example, the acquisition rules may be configured in advance for the static information and the dynamic information, so that the information is acquired based on the acquisition rules. For example, static information and dynamic information may be stored in the collection container.
The user then submits the video data including the first video, and the collected static information and dynamic information may be submitted at the same time. First, the static information and the dynamic information may be encrypted; then, the video data and the encrypted static information and dynamic information are packaged and transmitted to the server side, so that the server side can perform data processing such as data analysis and identification on the video data, the static information, and the dynamic information. For example, the data processing process may include: verifying the integrity of the verification information; if the verification information is complete, comparing the verification information with the reference verification information to obtain a verification result corresponding to at least one piece of verification information; and then obtaining a scoring result based on the at least one verification result. Finally, the video data and the scoring result corresponding to the video data are packaged and uploaded to an operation platform for staff review.
It should be noted that the video data may also be encrypted and then transmitted to the server.
Fig. 7 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure.
For example, as shown in fig. 7, an electronic device 700 may include a memory 701 and a processor 702. It should be noted that the components of the electronic device 700 shown in fig. 7 are only exemplary and not limiting, and the electronic device 700 may have other components according to the actual application.
For example, in some embodiments, the electronic device 700 may comprise the client in the embodiment of the video verification method shown in fig. 1. For example, the memory 701 is configured to store computer-readable instructions in a non-transitory manner; when executed by the processor 702, the computer-readable instructions implement one or more steps of the video verification method applied to the client according to any of the above embodiments.
For example, in other embodiments, the electronic device 700 may comprise the server side described in any of the above embodiments. For example, the memory 701 is configured to store computer-readable instructions in a non-transitory manner; when executed by the processor 702, the computer-readable instructions implement one or more steps of the video verification method applied to the server side according to any of the above embodiments.
For example, components such as the processor 702 and the memory 701 may communicate over a network connection. The network may include a wireless network, a wired network, and/or any combination of wireless and wired networks. The network may include a local area network, the Internet, a telecommunications network, an Internet of Things (Internet of Things) based on the Internet and/or a telecommunications network, and/or any combination thereof, and/or the like. The wired network may communicate by using twisted pair, coaxial cable, or optical fiber transmission, for example, and the wireless network may communicate by using 3G/4G/5G mobile communication network, bluetooth, zigbee, or WiFi, for example. The present disclosure is not limited herein as to the type and function of the network.
For example, the processor 702 may control other components in the electronic device 700 to perform desired functions. The processor 702 may be a device having data processing capability and/or program execution capability, such as a central processing unit (CPU), a tensor processing unit (TPU), or a graphics processing unit (GPU). The central processing unit (CPU) may be of an x86 or ARM architecture, etc. The GPU may be integrated separately and directly onto the motherboard, or built into the northbridge chip of the motherboard. The GPU may also be built into the central processing unit (CPU).
For example, the memory 701 may include any combination of one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer-readable instructions may be stored on the computer-readable storage medium, and the processor 702 may execute the computer-readable instructions to implement various functions of the electronic device 700. Various application programs and various data may also be stored in the storage medium.
It should be noted that the electronic device 700 provided in the embodiments of the present disclosure may adopt an Android system, an iOS system, a HarmonyOS (Hongmeng) system, or the like.
For example, for a detailed description of a process of the electronic device 700 executing the video verification method, reference may be made to the related description of the video verification method described in any of the above embodiments, and repeated descriptions are omitted here.
Fig. 8 is a schematic block diagram of a non-transitory computer-readable storage medium provided by an embodiment of the present disclosure.
For example, as shown in fig. 8, one or more computer-readable instructions 801 may be stored in a non-transitory manner on a non-transitory computer-readable storage medium 800. For example, the computer-readable instructions 801, when executed by a processor, may implement one or more steps of the video verification method applied to the client side in any of the above embodiments, or one or more steps of the video verification method applied to the server side in any of the above embodiments.
For example, the non-transitory computer-readable storage medium 800 may be applied in the electronic device 700 described above, which may include the memory 701 in the electronic device 700, for example.
For example, the description of the non-transitory computer-readable storage medium 800 may refer to the description of the memory 701 in the embodiment of the electronic device 700, and repeated descriptions are omitted.
For the present disclosure, there are also several points to be explained:
(1) The drawings of the embodiments of the disclosure only relate to the structures related to the embodiments of the disclosure, and other structures can refer to general designs.
(2) Thicknesses and dimensions of layers or structures may be exaggerated in the drawings used to describe the embodiments of the present disclosure for clarity. It will be understood that when an element such as a layer, film, region or substrate is referred to as being "on" or "under" another element, it can be "directly on" or "under" the other element, or intervening elements may be present.
(3) Without conflict, embodiments of the present disclosure and features of the embodiments may be combined with each other to arrive at new embodiments.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and the scope of the present disclosure should be subject to the scope of the claims.

Claims (21)

1. A video verification method is applied to a client, wherein the method comprises the following steps:
acquiring a shot first video to obtain video data comprising the first video;
acquiring a plurality of pieces of authentication information corresponding to the first video, wherein the plurality of pieces of authentication information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a terminal which shoots the first video, and the plurality of pieces of dynamic information represent information which is acquired by the client and generated in the process of shooting the first video;
and sending the video data and the verification information to a server side so that the server side can verify the first video based on the verification information and the video data.
2. The method of claim 1, wherein the video data further comprises a video identification corresponding to the first video, the method further comprising:
generating a hardware identification based on the at least one hardware static information;
binding the hardware identification with the first video to take the hardware identification as the video identification.
3. The method according to claim 1, wherein the client includes an application program, the first video is obtained by the terminal executing the application program for shooting,
collecting a plurality of verification information corresponding to the first video, including:
collecting the plurality of static information;
and acquiring the plurality of dynamic information in the process of shooting the first video after the application program is started.
4. The method of claim 3, wherein the plurality of static information further includes at least one software static information corresponding to the application,
the at least one piece of hardware static information includes one or more of the following pieces of information: the terminal comprises an international mobile equipment identity code of the terminal, a media access control address of the terminal, a model of a central processing unit of the terminal, root information of the terminal, a model of a main board of the terminal, an equipment identity number of the terminal and manufacturer information of a read-only memory of the terminal;
the at least one piece of software static information includes one or more of the following pieces of information: the method comprises the following steps of setting the version number of the application program, debugging information, read-only memory information, root information of the application program and a package name.
5. The method of claim 3, wherein the plurality of dynamic information includes at least one hardware dynamic information and at least one software dynamic information,
the at least one piece of hardware dynamic information is dynamic information generated by the terminal and collected in the process of shooting the first video, and the at least one piece of hardware dynamic information comprises one or more of the following pieces of information: the method comprises the following steps of sensor information, network information, equipment hardware information and external equipment information, wherein the sensor information comprises one or more of the following data: the device comprises an acceleration sensor, a gyroscope sensor, a gravity sensor, a rotation vector sensor, a pedometer sensor, a direction sensor, a geomagnetic rotation vector sensor and an illumination sensor, wherein the network information comprises wireless hotspot information and mobile network base station information, the device hardware information comprises GPS information, the external device information comprises Bluetooth device information and external storage device information,
the at least one piece of software dynamic information is dynamic information which is acquired in the process of shooting the first video and is generated when a user shooting the first video uses the application program, and the at least one piece of software dynamic information comprises one or more of the following pieces of information: internet protocol address information, risk plug-in, memory information, terminal bandwidth load, method stack, hook risk, behavior log, and software error reporting information log.
6. The method of claim 5, wherein the at least one hardware dynamic information is collected based on a predetermined collection rule after the client is started.
7. The method according to any one of claims 1-6, wherein the plurality of static information further comprises object information, and the object information is information representing an object in the first video input by a user who takes the first video through a terminal where the client is located.
8. A video verification method is applied to a server side, wherein the method comprises the following steps:
receiving video data including a first video and a plurality of pieces of verification information corresponding to the first video, wherein the plurality of pieces of verification information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a video shooting terminal that shoots the first video, and the plurality of pieces of dynamic information represent information acquired in a process of shooting the first video;
validating the first video based on the plurality of validation information and the video data.
9. The method of claim 8, wherein the client is located in the video capture terminal or in a terminal different from the video capture terminal, the video data further includes a video identification corresponding to the first video,
validating the first video based on the plurality of validation information and the video data, comprising:
determining a hardware identification based on the at least one hardware static information;
acquiring a video identifier in the video data;
and in response to the hardware identification and the video identification being consistent, determining the first video to be a real video shot by the video shooting terminal.
10. The method of claim 8, wherein validating the first video based on the plurality of validation information and the video data comprises:
acquiring a target type and a target number, wherein the target type represents a preset information type of verification information required to be acquired in the process of shooting the first video, and the target number represents a preset number of verification information required to be acquired in the process of shooting the first video;
determining the first video to be a real video captured by the video capturing terminal in response to the types of the plurality of authentication information being identical to the target type and the number of the plurality of authentication information being the same as the target number.
11. The method according to claim 9, wherein the plurality of static information further includes object information, the object information being information representing an object in the first video input by a user who captured the first video through a terminal where the client is located,
validating the first video based on the plurality of validation information and the video data, further comprising:
determining a target object based on the object information;
verifying consistency between the first video and the target object based on the plurality of verification information and the video data.
12. The method of claim 11, wherein verifying the consistency between the first video and the target object based on the plurality of verification information and the video data comprises:
obtaining at least one piece of reference verification information, wherein the at least one piece of reference verification information comprises at least one of the following information: information reflected by a terminal where a normal video shooting flow is operated, video content information reflected by the first video, and information corresponding to the target object,
processing at least one piece of verification information in the plurality of pieces of verification information and the at least one piece of reference verification information to obtain at least one verification result, wherein each verification result is a verification result corresponding to at least one piece of verification information;
verifying consistency between the first video and the target object based on the at least one verification result.
13. The method of claim 11, wherein verifying the consistency between the first video and the target object based on the plurality of verification information and the video data comprises:
processing at least one piece of verification information in the plurality of pieces of verification information through a machine learning model to obtain at least one verification result;
verifying consistency between the first video and the target object based on the at least one verification result.
14. The method of claim 12 or 13, wherein verifying the consistency between the first video and the target object based on the at least one verification result comprises:
determining at least one weight corresponding to the at least one verification result one by one;
processing the at least one verification result and the at least one weight to obtain a scoring result;
verifying consistency between the first video and the target object based on the scoring result.
15. The method of claim 14, wherein verifying the consistency between the first video and the target object based on the scoring result comprises:
determining that the first video and the target object are consistent when the score result is greater than or equal to a first predetermined threshold;
determining that the first video and the target object are not consistent when the score result is less than a second predetermined threshold;
when the score result is less than the first predetermined threshold and greater than the second predetermined threshold, acquiring historical information of a user who shoots the first video, and determining that the first video and the target object are consistent in response to the historical information indicating that the user has no cheating behavior; and determining that the first video and the target object are not consistent in response to the historical information indicating that the user has cheating behavior.
16. The method according to any one of claims 11-13, wherein the target object is an interior space and/or an exterior space of a building.
17. A video verification device applied to a client, wherein the video verification device comprises:
the acquisition module is configured to acquire a shot first video to obtain video data comprising the first video;
the acquisition module is configured to acquire a plurality of pieces of authentication information corresponding to the first video, wherein the plurality of pieces of authentication information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a terminal which shoots the first video, and the plurality of pieces of dynamic information represent information which is acquired by the client and generated in the process of shooting the first video;
the transmission module is configured to send the video data and the plurality of verification information to a server side, so that the server side can verify the first video based on the plurality of verification information and the video data.
18. A video verification device applied to a server side, wherein the video verification device comprises:
the system comprises a receiving module, a processing module and a display module, wherein the receiving module is configured to receive video data comprising a first video and a plurality of pieces of verification information corresponding to the first video, the video data being transmitted from a client, the plurality of pieces of verification information comprising a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information comprising at least one piece of hardware static information corresponding to a video shooting terminal shooting the first video, and the plurality of pieces of dynamic information representing information acquired in the process of shooting the first video;
a verification module configured to verify the first video based on the plurality of verification information and the video data.
19. An electronic device, comprising:
a processor; and
memory, wherein the memory has stored therein computer readable instructions, which when executed by the processor, implement the method of any of claims 1-7 or the method of any of claims 8-16.
20. A video verification system comprising: a client-side and a server-side,
wherein the client is configured to:
acquiring a shot first video to obtain video data comprising the first video;
acquiring a plurality of pieces of authentication information corresponding to the first video, wherein the plurality of pieces of authentication information include a plurality of pieces of static information and a plurality of pieces of dynamic information, the plurality of pieces of static information include at least one piece of hardware static information corresponding to a terminal which shoots the first video, and the plurality of pieces of dynamic information represent information which is acquired by the client and generated in the process of shooting the first video;
sending the video data and the plurality of verification information to the server side;
the server side is configured to:
receiving the video data and the plurality of authentication information;
validating the first video based on the plurality of validation information and the video data.
21. A non-transitory computer readable storage medium storing computer readable instructions, wherein the computer readable instructions, when executed by a processor, implement the method of any one of claims 1-7 or the method of any one of claims 8-16.
CN202110914275.6A 2021-08-10 2021-08-10 Video verification method, device and system, electronic equipment and storage medium Pending CN115708116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110914275.6A CN115708116A (en) 2021-08-10 2021-08-10 Video verification method, device and system, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115708116A true CN115708116A (en) 2023-02-21

Family

ID=85212282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110914275.6A Pending CN115708116A (en) 2021-08-10 2021-08-10 Video verification method, device and system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115708116A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106471795A (en) * 2014-05-12 2017-03-01 Philips Lighting Holding B.V. Verification of images captured using a timestamp decoded from illumination from a modulated light source
RU2016147412A (en) * 2016-12-05 2018-06-05 Общество с ограниченной ответственностью "Новые страховые технологии" Method for recording and authenticating recorded video data
CN111200741A (en) * 2020-04-02 2020-05-26 上海商魁信息科技有限公司 Video processing method and device and machine-readable storage medium
CN111310136A (en) * 2020-02-26 2020-06-19 支付宝(杭州)信息技术有限公司 Authenticity verification method, device and equipment for image data
CN112055229A (en) * 2020-08-18 2020-12-08 泰康保险集团股份有限公司 Video authentication method and device
CN112333165A (en) * 2020-10-27 2021-02-05 支付宝(杭州)信息技术有限公司 Identity authentication method, device, equipment and system
CN112651841A (en) * 2020-12-18 2021-04-13 中国平安人寿保险股份有限公司 Online business handling method and device, server and computer readable storage medium


Similar Documents

Publication Publication Date Title
US10699195B2 (en) Training of artificial neural networks using safe mutations based on output gradients
US20210327151A1 (en) Visual display systems and method for manipulating images of a real scene using augmented reality
WO2018228218A1 (en) Identification method, computing device, and storage medium
US10599975B2 (en) Scalable parameter encoding of artificial neural networks obtained via an evolutionary process
CN111523413B (en) Method and device for generating face image
CN111159474B (en) Multi-line evidence obtaining method, device and equipment based on block chain and storage medium
CN111639968B (en) Track data processing method, track data processing device, computer equipment and storage medium
CN110737798B (en) Indoor inspection method and related product
CN112698848A (en) Downloading method and device of machine learning model, terminal and storage medium
CN110264093B (en) Credit model establishing method, device, equipment and readable storage medium
CN110881050A (en) Security threat detection method and related product
JP2018026815A (en) Video call quality measurement method and system
CN115049057B (en) Model deployment method and device, electronic equipment and storage medium
TWI793418B (en) Image processing method and system
CN114780868A (en) Method and system for generating virtual avatar by user tag of metauniverse
CN112333165B (en) Identity authentication method, device, equipment and system
US20210158501A1 (en) Recommendation engine for comparing physical activity to ground truth
CN114422271A (en) Data processing method, device, equipment and readable storage medium
KR20200093910A (en) Method for providing data assocatied with original data, electronic device and storage medium therefor
US10154080B2 (en) Enhancing digital content provided from devices
CN110519269B (en) Verification method, device and system for image-text click data and mobile terminal
CN115708116A (en) Video verification method, device and system, electronic equipment and storage medium
CN116798129A (en) Living body detection method and device, storage medium and electronic equipment
CN111939556A (en) Method, device and system for detecting abnormal operation of game
CN105162799A (en) Method for checking whether client is legal mobile terminal or not and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230221