CN110661805B - Data processing method, device, storage medium and client - Google Patents

Data processing method, device, storage medium and client

Info

Publication number
CN110661805B
CN110661805B (granted from application CN201910932085.XA)
Authority
CN
China
Prior art keywords
attribute information
sound signal
data
physical environment
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910932085.XA
Other languages
Chinese (zh)
Other versions
CN110661805A (en)
Inventor
姜世琦
杨磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sasi Digital Technology Beijing Co ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay (Hangzhou) Information Technology Co., Ltd.
Priority to CN201910932085.XA
Publication of CN110661805A
Application granted
Publication of CN110661805B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/12 - Applying verification of the received information
    • H04L 63/123 - Applying verification of the received information: received data contents, e.g. message integrity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/04 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428 - Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 - Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 - Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0861 - Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/0866 - Generation of secret information including derivation or calculation of cryptographic keys or passwords involving user or device identifiers, e.g. serial number, physical or biometrical information, DNA, hand-signature or measurable physical characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Storage Device Security (AREA)

Abstract

The embodiments of this specification provide a data processing method, apparatus, storage medium, and client. The method includes: playing a sound signal with a set frequency into a physical environment, where the sound signal carries attribute information, that is, information used to describe a physical scene during the physical scene anchoring process; collecting physical environment data in the physical environment, where the physical environment data includes the sound signal with the set frequency; and uploading the physical environment data to a data platform. In the technical solution provided by the embodiments of this specification, the attribute information is recorded in the physical environment data in the form of a sound signal, and the physical scene anchoring process is completed using the attribute information extracted from the sound signal, which prevents the attribute information from being forged, tampered with, or replaced during the physical scene anchoring process.

Description

Data processing method, device, storage medium and client
[ technical field ]
The embodiments of this specification relate to the field of Internet technologies, and in particular to a data processing method, a data processing apparatus, a storage medium, and a client.
[ background of the invention ]
Anchoring physical scenes is one of the important problems in the field of the Internet of Things (IoT). Physical scene anchoring means recording an offline physical scene and verifying whether important attribute information of that scene, such as the time and the place, has been forged, tampered with, or replaced.
In the prior art, when an offline physical scene is recorded, the attribute information is recorded separately, which makes the attribute information easy to forge, tamper with, or replace.
[ summary of the invention ]
In view of this, embodiments of this specification provide a data processing method, an apparatus, a storage medium, and a client, which are used to prevent attribute information from being forged, tampered with, or replaced during the physical scene anchoring process.
In one aspect, an embodiment of the present specification provides a data processing method, including:
playing a sound signal with a set frequency into a physical environment, wherein the sound signal comprises attribute information, and the attribute information is information used for describing a physical scene in the anchoring process of the physical scene;
collecting physical environment data in the physical environment, wherein the physical environment data comprises sound signals with the set frequency;
and uploading the physical environment data to a data platform.
Optionally, the sound signal comprises an ultrasonic signal.
Optionally, before playing the sound signal with the set frequency into the physical environment, the method further includes:
encrypting the acquired attribute information to generate encrypted information;
coding the encrypted information according to a preset sound coding rule to generate the sound signal;
setting the frequency of the sound signal to the set frequency.
Optionally, the attribute information includes one or any combination of device identification information, time information, and geographic location information.
Optionally, the physical environment data comprises video data or audio data.
Optionally, the data platform comprises a distributed storage platform or a blockchain platform.
In another aspect, an embodiment of the present specification provides a data processing method, including:
acquiring physical environment data from a data platform;
extracting sound signals with set frequency from the physical environment data, wherein the sound signals comprise attribute information, and the attribute information is information used for describing a physical scene in the anchoring process of the physical scene;
extracting the attribute information from the sound signal;
and comparing whether the extracted attribute information is consistent with the attribute information acquired in advance, and generating a verification result.
Optionally, the sound signal comprises an ultrasonic signal.
Optionally the extracting the attribute information from the sound signal comprises:
decoding the sound signal according to a preset sound decoding rule to generate encrypted information;
and decrypting the encrypted information to generate the extracted attribute information.
Optionally, the attribute information includes one or any combination of device identification information, time information, and geographic location information.
Optionally, the physical environment data comprises video data or audio data.
Optionally, the data platform comprises a distributed storage platform or a blockchain platform.
In another aspect, an embodiment of the present specification provides a data processing method, including:
a first client plays a sound signal with set frequency to a physical environment, wherein the sound signal comprises attribute information, and the attribute information is information used for describing a physical scene in the anchoring process of the physical scene; collecting physical environment data in the physical environment, wherein the physical environment data comprises sound signals with the set frequency; uploading the physical environment data to a data platform;
the method comprises the steps that a first client or a second client obtains physical environment data from a data platform; extracting a sound signal of the set frequency from the physical environment data; extracting the attribute information from the sound signal; and comparing whether the extracted attribute information is consistent with the attribute information acquired in advance, and generating a verification result.
In another aspect, an embodiment of the present specification provides a data processing apparatus, including:
the system comprises a playing module, a processing module and a processing module, wherein the playing module is used for playing a sound signal with a set frequency to a physical environment, the sound signal comprises attribute information, and the attribute information is information used for describing a physical scene in the anchoring process of the physical scene;
the acquisition module is used for acquiring physical environment data in the physical environment, wherein the physical environment data comprises a sound signal with the set frequency;
and the sending module is used for uploading the physical environment data to a data platform.
Optionally, the method further comprises:
the encryption module is used for encrypting the acquired attribute information to generate encrypted information;
the coding module is used for coding the encrypted information according to a preset sound coding rule to generate the sound signal;
and the setting module is used for setting the frequency of the sound signal to be the set frequency.
In another aspect, an embodiment of the present specification provides a data processing apparatus, including:
the acquisition module is used for acquiring physical environment data from the data platform;
a first extraction module, configured to extract a sound signal with the set frequency from the physical environment data, where the sound signal includes attribute information, and the attribute information is information used to describe a physical scene in a physical scene anchoring process;
the second extraction module is further used for extracting the attribute information from the sound signal;
and the comparison module is used for comparing whether the extracted attribute information is consistent with the attribute information acquired in advance or not and generating a verification result.
Optionally, the second extraction module includes:
the decoding submodule is used for decoding the sound signal according to a preset sound decoding rule to generate encrypted information;
and the decryption submodule is used for decrypting the encrypted information to generate the extracted attribute information.
In another aspect, an embodiment of this specification provides a storage medium including a stored program, where, when the program runs, a device on which the storage medium is located is controlled to execute the data processing method performed by the first client described above.
In another aspect, an embodiment of this specification provides a storage medium including a stored program, where, when the program runs, a device on which the storage medium is located is controlled to execute the data processing method of verifying the attribute information described above.
In another aspect, the present specification provides a client, including a memory for storing information including program instructions and a processor for controlling execution of the program instructions, wherein the program instructions are loaded by the processor and executed to implement the steps of the data processing method.
In the technical solutions provided in this specification, a sound signal with a set frequency is played into a physical environment so that the collected physical environment data includes that sound signal. The sound signal with the set frequency is then extracted from the physical environment data, the attribute information is extracted from the sound signal, and a verification result is generated by comparing whether the extracted attribute information is consistent with the attribute information acquired in advance. Because the attribute information is recorded in the physical environment data in the form of a sound signal and the physical scene anchoring process is completed using the attribute information extracted from that signal, the attribute information is prevented from being forged, tampered with, or replaced during the physical scene anchoring process.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of this specification, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of this specification, and other drawings may be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a block diagram of an architectural framework for a data processing system in an embodiment of the present description;
FIG. 2 is a block diagram of an alternative architectural framework for a data processing system in an embodiment of the present description;
fig. 3 is a flowchart of a data processing method according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating a data processing method according to an embodiment of the present disclosure;
FIG. 5 is a flow chart of generating an acoustic signal in an embodiment of the present description;
FIG. 6 is a flow chart of another data processing method provided in an embodiment of the present description;
FIG. 7 is another schematic diagram of a data processing method in an embodiment of the present disclosure;
FIG. 8 is a flowchart of extracting attribute information in an embodiment of the present specification;
fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of another data processing apparatus provided in an embodiment of the present specification;
fig. 11 is a schematic diagram of a client provided in an embodiment of the present specification.
[ detailed description ]
For better understanding of the technical solutions in the present specification, the following detailed description of the embodiments of the present specification is provided with reference to the accompanying drawings.
It should be understood that the embodiments described are only a few embodiments of the present specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step are within the scope of the present specification.
The terminology used in the embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the specification. As used in the specification examples and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between related objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the related objects.
Fig. 1 is a schematic structural framework diagram of a data processing system in an embodiment of this specification. As shown in fig. 1, the data processing system includes a first client, a second client, and a data platform, where the first client and the second client are each communicatively connected to the data platform. As shown in fig. 1, the first client includes a playing module and a capturing module. For example, the playing module may include a speaker, the capturing module may include a microphone device, and optionally the capturing module may further include a camera device. The speaker is configured to play the sound signal, the microphone device is configured to capture the audio portion of the physical environment data, and the camera device is configured to capture the image portion of the physical environment data; the first client then combines the audio portion and the image portion to generate video data, or uses the audio portion alone as audio data. When the data processing system in fig. 1 is applied to law enforcement forensics, the first client may be a law enforcement recorder or a terminal device, where the terminal device may include a mobile phone, a tablet computer, a laptop computer, or a wearable device. In an embodiment of this specification, the second client may include a mobile phone, a tablet computer, a notebook computer, a personal computer, or a wearable device. As shown in fig. 1, the data platform may comprise a distributed storage platform or a blockchain platform. When the data platform comprises a blockchain platform, the first client and the second client may both be blockchain nodes in a blockchain network.
Fig. 2 is a schematic diagram of another structural framework of the data processing system in an embodiment of this specification. As shown in fig. 2, this data processing system differs from the one in fig. 1 in that it includes only a first client and a data platform, and does not include a second client. As shown in fig. 2, the first client includes a playing module and a capturing module. For example, the playing module may include a speaker, the capturing module may include a microphone device, and optionally the capturing module may further include a camera device. The speaker is configured to play the sound signal, the microphone device is configured to capture the audio portion of the physical environment data, and the camera device is configured to capture the image portion of the physical environment data; the first client then combines the audio portion and the image portion to generate video data, or uses the audio portion alone as audio data. When the data processing system in fig. 2 is applied to law enforcement forensics, the first client may include a mobile phone, a tablet computer, a notebook computer, or a wearable device. As shown in fig. 2, the data platform may comprise a distributed storage platform or a blockchain platform.
Based on each device in the data processing system shown in fig. 1 or fig. 2, an embodiment of the present specification provides a data processing method, in which a sound signal with a set frequency is added to collected physical environment data, so that when the physical environment data is used, a physical scene anchoring process can be implemented by verifying attribute information in the sound signal with the set frequency, where the attribute information is information used for describing a physical scene in the physical scene anchoring process, thereby avoiding counterfeiting, tampering, and replacement of the attribute information.
Fig. 3 is a flowchart of a data processing method according to an embodiment of the present disclosure, and fig. 4 is a schematic diagram of the data processing method according to the embodiment of the present disclosure. The data processing method shown in fig. 3 is executed on the first client in fig. 1 or fig. 2, and as shown in fig. 3 and fig. 4, the method includes:
Step 101: playing a sound signal with a set frequency into the physical environment, where the sound signal includes attribute information.
Step 102: collecting physical environment data in the physical environment, where the physical environment data includes the sound signal with the set frequency.
Step 103: uploading the physical environment data to the data platform.
In this embodiment, the set frequency may be a single frequency within a set frequency band or a plurality of frequencies within the set frequency band.
Preferably, the sound signal may include an ultrasonic signal, and the set frequency includes a single frequency in a frequency band of 15 kHz or above, or a plurality of frequencies in that band. Playing the sound signal at a set frequency of 15 kHz or above makes it essentially inaudible to the human ear, so the sound signal does not interfere with the other data in the physical environment data; in addition, noise interference in this band is low, which also ensures the accuracy of the set-frequency sound signal in the physical environment data.
In this embodiment, before step 101, the method further includes: and generating a sound signal with a set frequency.
Fig. 5 is a flowchart of generating a sound signal in an embodiment of the present specification, and as shown in fig. 5, specifically, the step of generating a sound signal with a set frequency includes:
Step 100a: encrypting the acquired attribute information to generate the encrypted information.
In this embodiment, the attribute information may include one or any combination of device identification information, time information, and geographic location information. As a preferred aspect, the attribute information may include device identification information, time information, and geographical location information.
The device identification information is a device ID, and the device ID is protected by device hardware and is a unique identifier of the device; the time information is a time stamp for generating the sound signal, and the time information can be synchronized with the network time; the geographical location information is the location of the first client, i.e. the location of the physical environment in which the first client is located. In this embodiment of the present description, the attribute information may be set according to a physical scene to which the data processing method is applied, for example: when the data processing method is applied to a physical scene of law enforcement evidence collection, the attribute information may include device identification information, time information and geographical location information, and when the data processing method is applied to other physical scenes, the attribute information may also be set to include other types of information, for example: user identity information, not to be enumerated here.
In this embodiment, the first client may obtain the attribute information in advance. Specifically, since the device identification information is the identification information of the first client, the first client can obtain its own device identification information directly; the first client records the current time when generating the sound signal and uses it as the time information; and the first client can acquire its geographical location information through a GPS positioning module or a BeiDou positioning module provided on the first client. Further, when the attribute information also includes user identity information, the user logs in to the first client with the user identity information when using it, so that the first client obtains the user identity information; for example, the user identity information may include a user account, a certificate number, or a mobile phone number.
In the embodiments of this specification, a symmetric encryption algorithm or an asymmetric encryption algorithm may be used to encrypt the attribute information to generate the encrypted information. For example, when a symmetric encryption algorithm is adopted, the attribute information may be encrypted with the encryption key to generate the encrypted information; as another example, when an asymmetric encryption algorithm is adopted, the attribute information may be encrypted with the public key to generate the encrypted information.
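For illustration only, a minimal sketch of this encryption step is given below. It assumes a symmetric scheme built on the Python cryptography library's Fernet recipe and uses illustrative field names; neither the library, the key handling, nor the field names are specified by this embodiment.

```python
# Hypothetical sketch of step 100a: encrypting the attribute information.
# The Fernet recipe, the JSON serialization and the field names are assumptions.
import json
from cryptography.fernet import Fernet

def encrypt_attributes(attributes: dict, key: bytes) -> bytes:
    """Serialize the attribute information and encrypt it with a symmetric key."""
    plaintext = json.dumps(attributes, sort_keys=True).encode("utf-8")
    return Fernet(key).encrypt(plaintext)

key = Fernet.generate_key()  # shared key, also held by the verifying party
encrypted = encrypt_attributes(
    {"device_id": "DEV-001", "timestamp": 1569744000, "gps": [39.90, 116.40]},
    key,
)
```

An asymmetric variant would instead encrypt with the verifying party's public key and decrypt with the corresponding private key.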
Step 100b: encoding the encrypted information according to a preset sound encoding rule to generate the sound signal.
In the embodiment of the present specification, the sound encoding rule is an encoding rule for converting data information into sound waves, for example: the sound encoding rules may include sound wave amplitude encoding rules or sound wave frequency encoding rules.
For example, if the sound encoding rule includes a sound wave amplitude encoding rule, step 100b specifically includes: encoding the encrypted information as amplitude values of a sound wave and generating the sound signal from those amplitude values, so that the encrypted information is encoded into the sound signal.
As another example, if the sound encoding rule includes a sound wave frequency encoding rule, step 100b specifically includes: encoding the encrypted information as frequency values of a sound wave and generating the sound signal from those frequency values, so that the encrypted information is encoded into the sound signal.
In practical applications, the voice encoding rule includes, but is not limited to, the above listed voice encoding rules, and the voice encoding rule may also be other encoding rules capable of converting data information into sound waves, which are not listed here.
Step 100c: setting the frequency of the sound signal to the set frequency.
As a preferable scheme, if the sound encoding rule includes a sound wave amplitude encoding rule, the frequency of the sound signal may be set to a single frequency within the set frequency band, for example 15 kHz.
As another preferable scheme, if the sound encoding rule includes a sound wave frequency encoding rule, the frequency of the sound signal may be set to a plurality of frequencies within the set frequency band, for example a plurality of frequencies between 20 kHz and 21 kHz. In this case, the frequency of the sound signal may be set to the frequency values of the sound wave generated during encoding, or to frequencies different from the frequency values generated during encoding.
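As one possible reading of the sound wave frequency-encoding rule, the sketch below maps each 4-bit unit of the encrypted information to one of 16 carrier frequencies inside an assumed 20 kHz to 21 kHz band; the symbol duration, sample rate, and value-to-frequency mapping are illustrative assumptions rather than values fixed by the embodiment.

```python
# Hypothetical sketch of steps 100b and 100c: frequency-encoding the encrypted
# information into a waveform whose frequencies lie in the set ultrasonic band.
import numpy as np

SAMPLE_RATE = 48000                     # Hz; must exceed twice the highest carrier
SYMBOL_SECONDS = 0.05                   # assumed duration of one encoded symbol
BAND_LOW, BAND_HIGH = 20000.0, 21000.0  # assumed set frequency band

def nibble_to_frequency(nibble: int) -> float:
    """Map a 4-bit value (0-15) to one of 16 carrier frequencies in the band."""
    return BAND_LOW + (BAND_HIGH - BAND_LOW) * nibble / 15.0

def encode_to_waveform(payload: bytes) -> np.ndarray:
    """Encode the encrypted bytes as a sequence of single-frequency tones."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    nibbles = [n for b in payload for n in (b >> 4, b & 0x0F)]
    tones = [np.sin(2.0 * np.pi * nibble_to_frequency(n) * t) for n in nibbles]
    return np.concatenate(tones).astype(np.float32)

# e.g. waveform = encode_to_waveform(encrypted), reusing 'encrypted' from the
# encryption sketch above; the waveform is then ready to be played by the speaker.
```

Sixteen levels spaced roughly 67 Hz apart keep the symbols well separated for the short-time spectral decoding sketched later.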
In the embodiments of this specification, the physical environment data may include video data or audio data. As shown in fig. 4, while the first client plays the sound signal into the physical environment and collects the physical environment data, the sound signal is also captured by the first client as part of the physical environment data.
In this embodiment, the first client may upload the physical environment data to the data platform so that the data platform stores it. Preferably, the data platform comprises a blockchain platform. In the embodiments of this specification, before the physical environment data is uploaded to the blockchain platform, the attribute information is already recorded in the physical environment data in the form of a sound signal, which ensures the authenticity of the attribute information before uploading; combined with blockchain technology, which guarantees that on-chain data is authentic, complete, and tamper-resistant, this forms a safe and reliable closed loop for the physical scene anchoring process. In particular, the integrity of the physical environment data is guaranteed.
In the embodiments of this specification, the data processing method can be applied to law enforcement and evidence collection, for example, when law enforcement personnel handle illegal parking at a railway station and record a law enforcement video in the process. The law enforcement officer, carrying the first client, arrives at the location of the illegally parked vehicle at the railway station. During the enforcement of the illegal parking case, the first client plays a sound signal with the set frequency into the physical environment through its speaker and collects video data, where the video data includes the sound signal with the set frequency, and the sound signal includes device identification information, time information, and geographical location information, for example: the device identification information is a device identification code, the time information is 14:00, and the geographic location information is a GPS coordinate. The first client then uploads the video data to the data platform so that the data platform can store it.
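Putting the playing, collecting, and uploading steps together, one hypothetical client-side flow could look like the sketch below; it assumes the sounddevice library for simultaneous playback and recording, and upload_to_data_platform is a named placeholder, not an API of any actual data platform.

```python
# Hypothetical sketch of the first client's flow: play the set-frequency signal
# while recording the physical environment, then hand the data to the platform.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48000  # Hz, matching the encoding sketch above

def anchor_scene(waveform: np.ndarray) -> np.ndarray:
    """Play the encoded sound signal and record the environment at the same time."""
    audio = sd.playrec(waveform, samplerate=SAMPLE_RATE, channels=1)
    sd.wait()               # block until playback and recording finish
    return audio.flatten()  # mono audio track of the physical environment data

def upload_to_data_platform(audio: np.ndarray) -> None:
    """Placeholder only: the platform-specific upload (e.g. to a blockchain node)
    is outside the scope of this sketch."""
    raise NotImplementedError
```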
In the technical solution of the data processing method provided in this embodiment of the specification, the sound signal with the set frequency is played into the physical environment so that the collected physical environment data includes that sound signal, the sound signal carries the attribute information, and the attribute information is thus recorded in the physical environment data in the form of a sound signal, which prevents the attribute information from being forged, tampered with, or replaced during the physical scene anchoring process. Because the attribute information cannot be forged, tampered with, or replaced during anchoring, the authenticity of the physical environment data is further ensured. The attribute information in this embodiment is also encrypted, which ensures the security of information transmission and further protects the attribute information in the sound signal from being forged, tampered with, or replaced.
Fig. 6 is a flowchart of another data processing method provided in an embodiment of the present specification, and fig. 7 is another schematic diagram of the data processing method in the embodiment of the present specification. The data processing method shown in fig. 6 is executed in the second client in fig. 1 or the first client in fig. 2, and as shown in fig. 6 and fig. 7, the method includes:
step 201, physical environment data is acquired from a data platform.
Step 202, extracting a sound signal with a set frequency from the physical environment data, wherein the sound signal includes attribute information.
Step 203, extracting attribute information from the sound signal.
And 204, comparing whether the extracted attribute information is consistent with the attribute information acquired in advance, and generating a verification result.
In this embodiment, step 201 may specifically include: sending a data acquisition request to a data platform; and receiving the physical environment data returned by the data platform according to the data acquisition request.
In this embodiment, the physical environment data may include audio signals with different frequencies, and in step 202, the sound signal with a set frequency may be extracted from the audio signals with different frequencies in the physical environment data.
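One way to perform this extraction, shown purely as a sketch, is to band-pass filter the captured audio track around the set frequency band; the Butterworth design and its order are implementation assumptions, not a method prescribed by the embodiment.

```python
# Hypothetical sketch of step 202: isolating the set-frequency sound signal
# from the audio track of the physical environment data.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def extract_band(audio: np.ndarray, sample_rate: int = 48000,
                 low_hz: float = 20000.0, high_hz: float = 21000.0) -> np.ndarray:
    """Keep only the components of the audio that fall inside the set band."""
    sos = butter(8, [low_hz, high_hz], btype="bandpass", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, audio)
```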
Fig. 8 is a flowchart of extracting attribute information in an embodiment of this specification, and as shown in fig. 8, step 203 specifically includes:
step 2031, decoding the audio signal according to a preset audio decoding rule to generate encrypted information.
In the embodiments of this specification, when the sound encoding rule includes a sound wave amplitude encoding rule, the corresponding sound decoding rule includes a sound wave amplitude decoding rule; when the sound encoding rule includes a sound wave frequency encoding rule, the corresponding sound decoding rule includes a sound wave frequency decoding rule.
For example, if the sound decoding rule includes a sound wave amplitude decoding rule, step 2031 specifically includes: decoding the amplitude values of the sound signal into the encrypted information, so that the sound signal is decoded into the encrypted information.
As another example, if the sound decoding rule includes a sound wave frequency decoding rule, step 2031 specifically includes: decoding the frequency values of the sound signal into the encrypted information, so that the sound signal is decoded into the encrypted information.
Step 2032: decrypting the encrypted information to generate the extracted attribute information.
In the embodiment of the present specification, if a symmetric encryption algorithm is adopted, the attribute information may be generated by decrypting the encrypted information through the encryption key; if an asymmetric encryption algorithm is adopted, the attribute information can be generated by decrypting the encrypted information through a private key.
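Continuing the hypothetical frequency-encoding sketch from the earlier section, the matching decoding and decryption steps could look roughly as follows; the per-symbol dominant-frequency detection, the assumption that the extracted signal is symbol-aligned, and the shared Fernet key are all illustrative choices rather than requirements of the embodiment.

```python
# Hypothetical sketch of steps 2031 and 2032: decode the set-frequency signal
# back into the encrypted bytes, then decrypt them into attribute information.
import json
import numpy as np
from cryptography.fernet import Fernet

SAMPLE_RATE = 48000
SYMBOL_SECONDS = 0.05
BAND_LOW, BAND_HIGH = 20000.0, 21000.0

def decode_waveform(band_signal: np.ndarray) -> bytes:
    """Recover the encrypted bytes from each symbol's dominant frequency."""
    n = int(SAMPLE_RATE * SYMBOL_SECONDS)
    freqs = np.fft.rfftfreq(n, d=1.0 / SAMPLE_RATE)
    nibbles = []
    for start in range(0, len(band_signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(band_signal[start:start + n]))
        peak_hz = freqs[np.argmax(spectrum)]
        level = round((peak_hz - BAND_LOW) / (BAND_HIGH - BAND_LOW) * 15.0)
        nibbles.append(min(15, max(0, level)))
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

def decrypt_attributes(encrypted: bytes, key: bytes) -> dict:
    """Decrypt the recovered payload back into the attribute information dict."""
    return json.loads(Fernet(key).decrypt(encrypted))
```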
In this embodiment, if the data platform includes a blockchain platform, in the blockchain network, all the blockchain link points in the blockchain network may acquire the attribute information in a node consensus, synchronization, or broadcast manner, that is, the first client or the second client in the blockchain network may acquire the attribute information. Because the first client is the client that initially obtains the attribute information, if the first client locally stores the attribute information, the first client can also directly obtain the attribute information from the locally stored data. In an embodiment of this specification, if the data platform includes a distributed storage platform, and the distributed storage platform stores attribute information uploaded by a user in advance, the first client or the second client may obtain the attribute information from the distributed storage platform. Or, the user inputs the attribute information at the first client or the second client to enable the first client or the second client to obtain the attribute information.
In this embodiment of the specification, in step 204, the attribute information extracted in step 203 is compared with the attribute information acquired in advance; if they are consistent, a verification success result is generated; if they are inconsistent, a verification failure result is generated.
Specifically, if the attribute information includes device identification information, time information, and geographic location information, step 204 specifically includes: comparing whether the extracted device identification information is consistent with the pre-acquired device identification information, whether the extracted time information is consistent with the pre-acquired time information, and whether the extracted geographic location information is consistent with the pre-acquired geographic location information. If all three are consistent, the verification succeeds and a verification success result is generated; if any one or any combination of them is inconsistent, the verification fails and a verification failure result is generated.
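A minimal sketch of this comparison step is shown below; the field names match the illustrative ones used in the earlier sketches, and the dictionary-based result format is an assumption.

```python
# Hypothetical sketch of step 204: field-by-field comparison of the extracted
# attribute information against the attribute information acquired in advance.
def verify_attributes(extracted: dict, expected: dict) -> dict:
    """Verification succeeds only if every expected field matches exactly."""
    mismatches = {field: (extracted.get(field), value)
                  for field, value in expected.items()
                  if extracted.get(field) != value}
    return {"verified": not mismatches, "mismatches": mismatches}
```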
In this embodiment of the specification, when the video data from the law enforcement evidence collection process needs to be used later, the first client or the second client may obtain the video data from the data platform, extract the sound signal with the set frequency from the video data, extract the device identification information, the time information, and the geographic location information from the sound signal, and compare whether the extracted device identification information, time information, and geographic location information are consistent with those acquired in advance; if all of them are consistent, the verification succeeds and a verification success result is generated. A successful verification indicates that the first client used in the law enforcement process is a legitimate device, that the time information at which the video data was collected is the correct time, and that the geographic location information at which the video data was collected is the correct place, none of which has been forged, tampered with, or replaced.
In the technical solution of the data processing method provided in this embodiment of the specification, the sound signal with the set frequency is extracted from the physical environment data, the attribute information is extracted from the sound signal, and a verification result is generated by comparing whether the extracted attribute information is consistent with the attribute information acquired in advance. Because the attribute information is recorded in the physical environment data in the form of a sound signal and the physical scene anchoring process is completed using the attribute information extracted from that signal, the attribute information is prevented from being forged, tampered with, or replaced during the physical scene anchoring process, which further ensures the authenticity of the physical environment data. The attribute information is also encrypted, which ensures the security of information transmission and further protects the attribute information in the sound signal from being forged, tampered with, or replaced.
Fig. 9 is a schematic structural diagram of a data processing apparatus provided in an embodiment of the present specification, where the data processing apparatus in fig. 9 is applicable to a first client, and as shown in fig. 9, the apparatus includes: the device comprises a playing module 11, a collecting module 12 and a sending module 13.
The playing module 11 is configured to play a sound signal with a set frequency into a physical environment, where the sound signal includes attribute information;
the acquisition module 12 is configured to acquire physical environment data in the physical environment, where the physical environment data includes a sound signal of the set frequency;
the sending module 13 is configured to upload the physical environment data to a data platform.
Further, the data processing apparatus further includes: an encryption module 14, an encoding module 15 and a setting module 16.
The encryption module 14 is configured to encrypt the acquired attribute information to generate encrypted information.
The encoding module 15 is configured to perform encoding processing on the encrypted information according to a preset sound encoding rule to generate the sound signal.
The setting module 16 is configured to set the frequency of the sound signal to the set frequency.
In the technical solution of the data processing apparatus provided in this embodiment of the specification, the sound signal with the set frequency is played into the physical environment so that the collected physical environment data includes that sound signal, and the sound signal carries the attribute information. The attribute information is thus recorded in the physical environment data in the form of a sound signal, which prevents it from being forged, tampered with, or replaced during the physical scene anchoring process.
Fig. 10 is a schematic structural diagram of another data processing apparatus provided in an embodiment of this specification, where the data processing apparatus in fig. 10 is applicable to a first client or a second client, as shown in fig. 10, the apparatus includes: the device comprises an acquisition module 21, a first extraction module 22, a second extraction module 23 and a comparison module 24.
The obtaining module 21 is configured to obtain the physical environment data from the data platform.
The first extraction module 22 is configured to extract a sound signal of the set frequency from the physical environment data, where the sound signal includes attribute information.
The second extraction module 23 is further configured to extract the attribute information from the sound signal.
The comparison module 24 is configured to compare whether the extracted attribute information is consistent with the attribute information acquired in advance, and generate a verification result.
Further, the second extraction module 23 includes: a decoding sub-module 231 and a decryption sub-module 232.
The decoding submodule 231 is configured to decode the sound signal according to a preset sound decoding rule to generate the encrypted information.
The decryption submodule 232 is configured to decrypt the encrypted information to generate extracted attribute information.
In the technical solution of the data processing apparatus provided in this embodiment of the specification, the sound signal with the set frequency is extracted from the physical environment data, the attribute information is extracted from the sound signal, and a verification result is generated by comparing whether the extracted attribute information is consistent with the attribute information acquired in advance. Because the attribute information is recorded in the physical environment data in the form of a sound signal and the physical scene anchoring process is completed using the attribute information extracted from that signal, the attribute information is prevented from being forged, tampered with, or replaced during the physical scene anchoring process.
Embodiments of the present specification provide a storage medium, where the storage medium includes a stored program, where, when the program runs, a device on which the storage medium is located is controlled to execute each step of the above embodiments of the data processing method, and specific descriptions may refer to each embodiment of the above data processing method.
Embodiments of the present specification provide a client, including a memory and a processor, where the memory is used to store information including program instructions, and the processor is used to control execution of the program instructions, and the program instructions are loaded by the processor and executed to implement the steps of the embodiments of the data processing method, and specific descriptions may refer to the embodiments of the data processing method. In this specification, the client may include the first client or the second client.
Fig. 11 is a schematic diagram of a client provided in an embodiment of this specification. As shown in fig. 11, the client 30 of this embodiment includes a processor 31, a memory 32, and a computer program 33 stored in the memory 32 and executable on the processor 31. When executed by the processor 31, the computer program 33 implements the data processing method of the embodiments, which is not repeated here to avoid redundancy. Alternatively, when executed by the processor 31, the computer program implements the functions of each module/unit in the data processing apparatus of the embodiments, which are likewise not repeated here.
Client 30 may include, but is not limited to, a processor 31, a memory 32. Those skilled in the art will appreciate that fig. 11 is merely an example of a client 30 and does not constitute a limitation on client 30, and may include more or fewer components than shown, or combine certain components, or different components, e.g., client 30 may also include input-output devices, network access devices, buses, etc.
The Processor 31 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 32 may be an internal storage unit of the client 30, such as a hard disk or memory of the client 30. The memory 32 may also be an external storage device of the client 30, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the client 30. Further, the memory 32 may include both an internal storage unit of the client 30 and an external storage device. The memory 32 is used to store the computer program and other programs and data required by the computer device, and may also be used to temporarily store data that has been output or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present specification, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present specification may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a Processor (Processor) to execute some steps of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (21)

1. A method of data processing, comprising:
playing a sound signal with a set frequency to a physical environment, wherein the sound signal comprises attribute information, the attribute information is information used for describing a physical scene in the anchoring process of the physical scene, and the set frequency comprises a single frequency in a set frequency band or a plurality of frequencies in the set frequency band;
collecting physical environment data in the physical environment, wherein the physical environment data comprises sound signals with the set frequency;
and uploading the physical environment data to a data platform.
2. The data processing method of claim 1, the sound signal comprising an ultrasonic signal.
3. The data processing method of claim 1, wherein before playing the sound signal with the set frequency into the physical environment, the method further comprises:
encrypting the acquired attribute information to generate encrypted information;
coding the encrypted information according to a preset sound coding rule to generate the sound signal;
setting the frequency of the sound signal to the set frequency.
4. The data processing method of claim 3, wherein the attribute information comprises one or any combination of device identification information, time information, and geographic location information.
5. The data processing method of any of claims 1 to 4, the physical environment data comprising video data or audio data.
6. The data processing method of any of claims 1 to 4, the data platform comprising a distributed storage platform or a blockchain platform.
7. A method of data processing, comprising:
acquiring physical environment data from a data platform;
extracting a sound signal with a set frequency from the physical environment data, wherein the sound signal comprises attribute information, the attribute information is information used for describing a physical scene in a physical scene anchoring process, and the set frequency comprises a single frequency in a set frequency band or a plurality of frequencies in the set frequency band;
extracting the attribute information from the sound signal;
and comparing whether the extracted attribute information is consistent with the attribute information acquired in advance, and generating a verification result.
8. The data processing method of claim 7, the sound signal comprising an ultrasonic signal.
9. The data processing method of claim 7, the extracting the attribute information from the sound signal comprising:
decoding the sound signal according to a preset sound decoding rule to generate encrypted information;
and decrypting the encrypted information to generate the extracted attribute information.
10. The data processing method of claim 7, wherein the attribute information comprises one or any combination of device identification information, time information, and geographical location information.
11. The data processing method of any of claims 7 to 10, the physical environment data comprising video data or audio data.
12. The data processing method of any of claims 7 to 10, the data platform comprising a distributed storage platform or a blockchain platform.
13. A method of data processing, comprising:
the method comprises the steps that a first client plays a sound signal with set frequency to a physical environment, wherein the sound signal comprises attribute information, the attribute information is information used for describing a physical scene in the anchoring process of the physical scene, and the set frequency comprises a single frequency in a set frequency band or a plurality of frequencies in the set frequency band; collecting physical environment data in the physical environment, wherein the physical environment data comprises sound signals with the set frequency; uploading the physical environment data to a data platform;
the method comprises the steps that a first client or a second client obtains physical environment data from a data platform; extracting a sound signal of the set frequency from the physical environment data; extracting the attribute information from the sound signal; and comparing whether the extracted attribute information is consistent with the attribute information acquired in advance, and generating a verification result.
14. A data processing apparatus comprising:
the system comprises a playing module, a processing module and a processing module, wherein the playing module is used for playing a sound signal with a set frequency to a physical environment, the sound signal comprises attribute information, the attribute information is information used for describing a physical scene in the anchoring process of the physical scene, and the set frequency comprises a single frequency in a set frequency band or a plurality of frequencies in the set frequency band;
the acquisition module is used for acquiring physical environment data in the physical environment, wherein the physical environment data comprises a sound signal with the set frequency;
and the sending module is used for uploading the physical environment data to a data platform.
15. The data processing apparatus of claim 14, further comprising:
the encryption module is used for encrypting the acquired attribute information to generate encrypted information;
the coding module is used for coding the encrypted information according to a preset sound coding rule to generate the sound signal;
and the setting module is used for setting the frequency of the sound signal to be the set frequency.
16. A data processing apparatus comprising:
the acquisition module is used for acquiring physical environment data from the data platform;
a first extraction module, configured to extract a sound signal with a set frequency from the physical environment data, where the sound signal includes attribute information, and the attribute information is information used to describe a physical scene in a physical scene anchoring process, and the set frequency includes a single frequency in a set frequency band or multiple frequencies in the set frequency band;
the second extraction module is further used for extracting the attribute information from the sound signal;
and the comparison module is used for comparing whether the extracted attribute information is consistent with the attribute information acquired in advance or not and generating a verification result.
17. The data processing apparatus of claim 16, the second extraction module comprising:
the decoding submodule is used for decoding the sound signal according to a preset sound decoding rule to generate encrypted information;
and the decryption submodule is used for decrypting the encrypted information to generate the extracted attribute information.
18. A storage medium comprising a stored program, wherein a device on which the storage medium is located is controlled to perform the data processing method of any one of claims 1 to 6 when the program is run.
19. A storage medium comprising a stored program, wherein the apparatus on which the storage medium is located is controlled to perform the data processing method of any one of claims 7 to 12 when the program is run.
20. A client comprising a memory for storing information including program instructions and a processor for controlling the execution of the program instructions, which program instructions are loaded and executed by the processor to implement the steps of the data processing method of any one of claims 1 to 6.
21. A client comprising a memory for storing information including program instructions and a processor for controlling the execution of the program instructions, which program instructions, when loaded and executed by the processor, implement the steps of the data processing method of any of claims 7 to 12.
CN201910932085.XA 2019-09-29 2019-09-29 Data processing method, device, storage medium and client Active CN110661805B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910932085.XA CN110661805B (en) 2019-09-29 2019-09-29 Data processing method, device, storage medium and client

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910932085.XA CN110661805B (en) 2019-09-29 2019-09-29 Data processing method, device, storage medium and client

Publications (2)

Publication Number Publication Date
CN110661805A CN110661805A (en) 2020-01-07
CN110661805B (en) 2022-05-24

Family

ID=69039748

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910932085.XA Active CN110661805B (en) 2019-09-29 2019-09-29 Data processing method, device, storage medium and client

Country Status (1)

Country Link
CN (1) CN110661805B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114003874A (en) * 2020-02-26 2022-02-01 支付宝(杭州)信息技术有限公司 Authenticity verification method, device and equipment for image data
CN112822219A (en) * 2021-03-01 2021-05-18 浙江大华技术股份有限公司 Multimedia data verification method, device, system and storage medium
CN115499196A (en) * 2022-09-14 2022-12-20 蚂蚁区块链科技(上海)有限公司 Scene forensics method, scene verification method, key transmitting device and server

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013001466A1 (en) * 2011-06-27 2013-01-03 Silaoglu Anil Sarper An evaluation system and method
CN103490827A (en) * 2012-06-12 2014-01-01 上海君戈网络科技有限公司 Message switching system and message switching method based on ultrasonic waves
CN105979053A (en) * 2016-04-18 2016-09-28 广东欧珀移动通信有限公司 Method and device for detecting microphone in mobile terminal and mobile terminal
CN107249009A (en) * 2017-08-02 2017-10-13 广东工业大学 A kind of data verification method and system based on block chain
CN109191187A (en) * 2018-08-17 2019-01-11 浙江口碑网络技术有限公司 The acquisition methods and device of shop pushed information
CN110022355A (en) * 2019-03-04 2019-07-16 阿里巴巴集团控股有限公司 The storage method of environmental data, verification method and apparatus under special scenes

Also Published As

Publication number Publication date
CN110661805A (en) 2020-01-07

Similar Documents

Publication Publication Date Title
US11817130B2 (en) Forensic video recording with presence detection
CN110661805B (en) Data processing method, device, storage medium and client
US10176309B2 (en) Systems and methods for authenticating video using watermarks
US7792296B2 (en) Access-controlled encrypted recording method for site, interaction and process monitoring
US20180249190A1 (en) Method and apparatus for cloud storage and cloud download of multimedia data
JP4740157B2 (en) Protect digital data content
WO2021169530A1 (en) Method, apparatus and device for verifying authenticity of image data
CN101110982A (en) Method for acquiring evidence using wireless terminal and server
CN101296148B (en) Verification method, system and device for validity of multimedia contents
CN103391196A (en) Asset digital authentication method and device
CN108881966B (en) Information processing method and related equipment
CN115964730A (en) Block chain information storage method based on alliance chain multi-chain
CN116015627A (en) Face recognition tamper-proof method and system based on symmetric algorithm
CN113704707A (en) Block chain-based audio tamper-proof method and device
CN111600879B (en) Data output/acquisition method and device and electronic equipment
CN110380843B (en) Information processing method and related equipment
US11599570B2 (en) Device and method to render multimedia data stream tamper-proof based on block chain recording
CN110381342B (en) Information processing method and related equipment
CN108769748B (en) Information processing method and related equipment
CN111988624A (en) Video processing method, device, equipment and storage medium
Beser et al. Authentication of digital video evidence
CN115086730B (en) Subscription video generation method, subscription video generation system, computer equipment and subscription video generation medium
US20070172055A1 (en) Apparatus and method for distorting digital contents and recovering the distorted contents
CN115834035A (en) Multimedia data storage method, computer equipment and storage device
JP7249752B2 (en) Content upload device, content verification device and their programs

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240922

Address after: Room 302, 3rd Floor, Building 1, Yard 1, Danling Street, Haidian District, Beijing, 100080

Patentee after: Sasi Digital Technology (Beijing) Co.,Ltd.

Country or region after: China

Address before: 801-11, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310007

Patentee before: Alipay (Hangzhou) Information Technology Co.,Ltd.

Country or region before: China