CN110942021B - Environment monitoring method, device, equipment and storage medium - Google Patents

Info

Publication number
CN110942021B
Authority
CN
China
Prior art keywords
image, hash value, feature, determining, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911165967.4A
Other languages
Chinese (zh)
Other versions
CN110942021A (en)
Inventor
蔡弋戈
秦青
杨晨
王乐庆
李琴
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911165967.4A
Publication of CN110942021A
Application granted
Publication of CN110942021B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures


Abstract

The embodiments of this application disclose an environment monitoring method, device, equipment and storage medium, applied to a blockchain network. The method comprises the following steps: receiving a first image and a comparison image identifier submitted by environment monitoring equipment, and obtaining fixed object information of the first image; acquiring image information of a second image based on the comparison image identifier; transmitting the fixed object information and the image information to a consensus node; obtaining a hash value of a first characteristic image of the second image from the target block; determining a hash value of a second characteristic image of the first image; and determining the similarity of the first image and the second image based on the hash value of the first characteristic image and the hash value of the second characteristic image, and sending an environment change notification to an environment monitoring department when the similarity is smaller than a preset similarity. With the embodiments of this application, environmental changes can be detected, and the applicability is high.

Description

Environment monitoring method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an environment monitoring method, apparatus, device, and storage medium.
Background
With the continued modernization of China, people's living standards keep improving. On one hand, industrial development has continually damaged the natural environment; on the other hand, sustained efforts by people have greatly improved parts of it.
The natural environment is one of today's hot topics, and environmental protection departments of governments at all levels, enterprises, schools, tourist attractions and ordinary people are all eager to learn about environmental changes. However, monitoring changes in the natural environment usually requires manual observation by environmental monitoring personnel, which takes a long time and wastes manpower and material resources. Therefore, how to quickly and accurately detect whether the natural environment has changed is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides an environment monitoring method, an environment monitoring device, environment monitoring equipment and a storage medium, which can detect environment changes and are high in applicability.
In a first aspect, an embodiment of the present application provides an environmental monitoring method, including:
receiving a first image submitted by environment monitoring equipment and a comparison image identifier, and acquiring fixed object information of the first image, wherein the comparison image identifier is used for marking a second image which is compared with the first image, and the fixed object information comprises contour information, position information and a fixed object type;
Acquiring image information of the second image from a target block of the blockchain network based on the comparison image identifier;
transmitting the fixed object information and the image information to a consensus node so that the consensus node verifies whether the first image and the second image are images of the same environmental scene and transmits a signature confirmation message after the verification is passed;
receiving the signature confirmation message and acquiring a hash value of a first characteristic image of the second image from the target block when the signature confirmation message meets a preset consensus strategy;
determining a hash value of a second feature image of the first image;
and determining the similarity of the first image and the second image based on the hash value of the first characteristic image and the hash value of the second characteristic image, and sending an environment change notification to an environment monitoring department when the similarity is smaller than a preset similarity.
In a second aspect, embodiments of the present application provide an environmental monitoring apparatus, the apparatus including:
the receiving module is used for receiving a first image submitted by the environment monitoring equipment and a comparison image identifier, and acquiring fixed object information of the first image, wherein the comparison image identifier is used for marking a second image which is compared with the first image, and the fixed object information comprises contour information, position information and a fixed object type;
The first acquisition module is used for acquiring the image information of the second image from the target block of the block chain network based on the comparison image identification;
the sending module is used for sending the fixed object information and the image information to a consensus node so that the consensus node verifies whether the first image and the second image are images of the same environment scene or not and sends a signature confirmation message after verification is passed;
the second acquisition module is used for receiving the signature confirmation message and acquiring a hash value of the first characteristic image of the second image from the target block when the signature confirmation message meets a preset consensus strategy;
a first determining module, configured to determine a hash value of a second feature image of the first image;
and the second determining module is used for determining the similarity of the first image and the second image based on the hash value of the first characteristic image and the hash value of the second characteristic image, and sending an environment change notification to an environment monitoring department when the similarity is smaller than a preset similarity.
With reference to the second aspect, in one possible implementation manner, the hash value of the first feature image is an average hash value; the first determining module includes:
The first processing unit is used for scaling the second characteristic image to obtain a first scaled characteristic image with the size of 8 x 8;
the second processing unit is used for converting the first zooming characteristic image into a 256-level first gray characteristic image and determining a first average value of gray values of all pixel points of the first gray characteristic image;
a first determining unit, configured to compare each gray value in the first gray feature image with the first average value to determine a hash value of a second feature image of the first image, where the hash value of the second feature image includes 64 bits, one bit corresponds to one gray value of the first gray feature image, and a bit corresponding to a case where any gray value of the first gray feature image is greater than the first average value is recorded as 1, otherwise, is recorded as 0.
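As an illustration, the average-hash steps above (scale the feature image to 8×8, convert to 256-level gray, and compare each pixel with the mean) can be sketched roughly as follows. This is a hedged sketch, not the patent's implementation: the image is modeled as a 2D list of gray values, and the floor-style nearest-neighbor scaling and the function names are assumptions.

```python
def scale_nearest(gray, w=8, h=8):
    """Scale a 2D gray image to w x h by (floor) nearest-neighbor sampling."""
    src_h, src_w = len(gray), len(gray[0])
    return [[gray[r * src_h // h][c * src_w // w] for c in range(w)]
            for r in range(h)]

def average_hash(gray):
    """64-bit aHash string: bit is 1 where a pixel exceeds the mean gray."""
    small = scale_nearest(gray)
    pixels = [p for row in small for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)
```

On a 16×16 image whose top half is black and bottom half is white, the hash is 32 zeros followed by 32 ones, since only the bright pixels exceed the mean.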
With reference to the second aspect, in one possible implementation manner, the hash value of the first feature image is a perceptual hash value; the first determining module includes:
the third processing unit is used for scaling the second characteristic image to obtain a second scaled characteristic image with the size of 32 x 32, and converting the second scaled characteristic image into a second gray characteristic image with 256 steps;
A fourth processing unit, configured to perform discrete cosine transform on the second gray feature image to obtain a discrete feature image with a size of 32×32, and determine a third scaled feature image with a size of 8×8 from an upper left corner of the discrete feature image;
and a second determining unit, configured to determine a second average value of the frequency domain values of all pixels of the third scaled feature image, and compare each frequency domain value of the third scaled feature image with the second average value to determine the hash value of the second feature image of the first image, where the hash value of the second feature image includes 64 bits, one bit corresponds to one frequency domain value of the third scaled feature image, and the bit corresponding to any frequency domain value of the third scaled feature image greater than the second average value is recorded as 1, and otherwise as 0.
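A rough sketch of this perceptual-hash flow, under stated assumptions: the 32×32 input is already a 256-level gray image, the naive O(n^4) DCT below is for illustration only (real implementations use a fast transform), and the function names are not from the patent.

```python
import math

def dct2(img):
    """Naive 2D DCT-II (unnormalized) of a square n x n image."""
    n = len(img)
    cos = [[math.cos((2 * x + 1) * u * math.pi / (2 * n)) for x in range(n)]
           for u in range(n)]
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(img[x][y] * cos[u][x] * cos[v][y]
                            for x in range(n) for y in range(n))
    return out

def perceptual_hash(gray32):
    """64-bit pHash: DCT, keep the top-left 8x8 low frequencies, threshold."""
    freq = dct2(gray32)
    corner = [row[:8] for row in freq[:8]]   # 8x8 upper-left corner
    vals = [v for row in corner for v in row]
    mean = sum(vals) / len(vals)
    return ''.join('1' if v > mean else '0' for v in vals)
```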
With reference to the second aspect, in one possible implementation manner, the second determining module includes:
a third determining unit configured to determine the number of data bits that differ between the hash value of the second feature image and the hash value of the first feature image;
and a fourth determining unit configured to determine a ratio of the number of the different data bits to the number of bits of the hash value of the first feature image or the hash value of the second feature image, and determine a similarity between the first image and the second image based on the ratio.
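This bitwise comparison can be sketched as below; treating similarity as one minus the differing-bit ratio is an assumption about the final mapping, which the text leaves open.

```python
def hash_similarity(h1, h2):
    """Similarity of two equal-length bit strings (aHash/pHash style)."""
    if len(h1) != len(h2):
        raise ValueError("hash values must have the same bit length")
    diff = sum(a != b for a, b in zip(h1, h2))   # number of differing bits
    return 1 - diff / len(h1)
```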
With reference to the second aspect, in one possible implementation manner, the hash value of the first feature image is a fuzzy hash value; the first determining module includes:
a fifth processing unit, configured to sort each pixel point of the second feature image based on a preset arrangement sequence, to obtain a pixel point sequence;
a sixth processing unit, configured to divide the pixel sequence to obtain a plurality of pixel sub-sequences, where a sum of data lengths of the plurality of pixel sub-sequences is not smaller than a data length of the pixel sequence;
and the connection unit is used for respectively determining the hash value of each pixel sub-sequence, and connecting the hash values of each pixel sub-sequence to obtain the hash value of the second characteristic image of the first image.
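A loose sketch of this fuzzy-hash flow. The row-major pixel ordering, the fixed chunk size with overlap (which makes the sub-sequence lengths sum to more than the original length, satisfying the "not smaller" condition above), and the truncated SHA-256 digests are all illustrative assumptions; real fuzzy hashing such as ssdeep uses content-driven chunk boundaries instead.

```python
import hashlib

def fuzzy_hash(gray, chunk=16, overlap=4):
    """Concatenated digests of overlapping pixel sub-sequences."""
    # Row-major order stands in for the "preset arrangement sequence".
    seq = bytes(p for row in gray for p in row)
    pieces = []
    step = chunk - overlap
    for start in range(0, len(seq), step):
        pieces.append(seq[start:start + chunk])
        if start + chunk >= len(seq):
            break
    # Hash each sub-sequence and join the (truncated) digests.
    return ''.join(hashlib.sha256(p).hexdigest()[:8] for p in pieces)
```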
With reference to the second aspect, in one possible implementation manner, the second determining module includes:
a fifth determining unit configured to determine a weighted editing distance from the hash value of the second feature image to the hash value of the first feature image;
a sixth determining unit configured to determine a total data length of a data length of the hash value of the first feature image and a data length of the hash value of the second feature image;
And a seventh determining unit, configured to determine a ratio of the weighted editing distance to the total data length, and perform numerical mapping on the ratio based on a preset mapping rule, so as to obtain a similarity between the first image and the second image.
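As a sketch of this similarity computation: a plain (unweighted) Levenshtein distance stands in below for the patent's weighted editing distance, and the linear mapping to a 0-100 score is an assumed "preset mapping rule".

```python
def levenshtein(a, b):
    """Edit distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def edit_similarity(h1, h2):
    """Map the distance / total-length ratio onto a 0-100 similarity score."""
    ratio = levenshtein(h1, h2) / (len(h1) + len(h2))
    return round((1 - ratio) * 100)
```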
With reference to the second aspect, in a possible implementation manner, the environmental monitoring device further submits a shooting date of the first image; the device further comprises:
the third acquisition module is further used for acquiring a time stamp in the block header of the target block and determining the storage date of the image information based on the time stamp;
and a comparison module for comparing a shooting date of the first image with a storage date of the image information, and if the shooting date is after the storage date and a date interval between the shooting date and the storage date is greater than a preset date interval, executing the step of transmitting the fixed object information and the image information to a consensus node.
In a third aspect, embodiments of the present application provide an apparatus comprising a processor and a memory, the processor and the memory being interconnected. The memory is configured to store a computer program supporting the terminal device to perform the method provided by the first aspect and/or any of the possible implementation manners of the first aspect, the computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method provided by the first aspect and/or any of the possible implementation manners of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program for execution by a processor to implement the method provided by the first aspect and/or any one of the possible implementation manners of the first aspect.
In the embodiments of this application, based on the fixed object information of the first image and the image information of the second image, whether the first image and the second image are images of the same environmental scene can be determined, so that the environmental change of the environmental scene is determined based on different images of the same scene. Determining whether the first image and the second image are images of the same environmental scene through the consensus nodes in the blockchain network can improve the accuracy of that determination. Further, the similarity of the first image and the second image is determined based on the hash value of the first characteristic image and the hash value of the second characteristic image, so that environmental changes can be detected more accurately, and the applicability is high.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a network configuration diagram of an environment monitoring method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of an environmental monitoring method according to an embodiment of the present application;
FIG. 3 is a schematic view of a scenario for determining hash values of a feature image according to an embodiment of the present disclosure;
FIG. 4 is another schematic view of determining hash values of a feature image according to an embodiment of the present application;
FIG. 5 is a schematic view of another scenario for determining hash values of a feature image provided in an embodiment of the present application;
fig. 6 is a schematic view of a scenario for determining similarity according to an embodiment of the present application;
FIG. 7 is a schematic view of a scenario for determining weighted edit distances provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of an environmental monitoring device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a network structure diagram of an environment monitoring method according to an embodiment of the present application. As shown in fig. 1, the environment monitoring device 10 may photograph the natural environment to obtain a first image of the natural environment and send the image to the blockchain network 20. The environment monitoring device 10 includes, but is not limited to, an aerial camera (satellite, unmanned aerial vehicle, etc.), various types of video cameras, various types of cameras, etc., and may be determined based on the actual application scenario, which is not limited herein. When the environment monitoring device 10 submits the captured first image to the blockchain network 20, a comparison image identifier may also be submitted, the comparison image identifier being used to cause the blockchain network 20 to determine the second image that is to be compared with the first image. Further, after the blockchain network 20 receives the first image submitted by the environment monitoring device 10, it may acquire the fixed object information in the first image and acquire the image information of the second image from the target block, so that the fixed object information of the first image and the image information of the second image may be sent to the consensus nodes, so that the consensus nodes verify whether the first image and the second image are images of the same environmental scene and send a signature confirmation message after the verification is passed.
When the signature confirmation message meets a preset consensus policy, the hash value of the second characteristic image of the first image and the hash value of the first characteristic image of the second image can be determined, so that the similarity of the first image and the second image is determined based on the hash value of the first characteristic image and the hash value of the second characteristic image, and an environment change notification is sent to an environment monitoring department when the similarity of the first image and the second image is smaller than a preset similarity. The first characteristic image and the second characteristic image are used to represent the environmental features of the second image and the first image, respectively.
Referring to fig. 2, fig. 2 is a flow chart of an environmental monitoring method according to an embodiment of the present application. The flow chart of the environmental monitoring method provided in the embodiment of the present application may include the following steps S101 to S106.
S101, receiving a first image submitted by the environment monitoring equipment and comparing image identifications, and acquiring fixed object information of the first image.
In some possible embodiments, upon receiving a first image submitted by an environment monitoring device, fixed object information of the first image may be acquired, which may be used for comparison with other images to determine whether the two images show the same environmental scene (e.g., an environmental image of the same location). A fixed object is an environment object that can represent the image features of the first image and is unlikely to change over a long time; fixed objects include, but are not limited to, stones, mountains, buildings, rivers, etc., and may be determined based on the actual image, which is not limited herein. The fixed object information may include contour information, position information, and a fixed object type (peak, stone, building, etc.), where the position information may represent the geographic position of the fixed object and its position in the environmental scene of the first image. For example, when a peak is included in the first image, the peak is at an essentially fixed position, so the peak may serve as a fixed object of the first image: the contour of the peak may be the contour information, the position of the peak in the first image may be the position information, and "peak" may be the fixed object type. Based on the fixed object information of the first image, it may be determined whether the environmental scene of any other image and the environmental scene of the first image are the same environmental scene.
In some possible embodiments, when the first image submitted by the environment monitoring device is received, the comparison image identifier submitted by the environment monitoring device may also be received together, where the comparison image identifier is used to mark the second image that is compared with the first image. That is, when the environment monitoring device submits the first image, it also submits the comparison image identifier to indicate that the first image is to be compared with the second image marked by that identifier, so as to determine the similarity of the first image and the second image and thereby the environmental change of the environmental scene in the first image. The comparison image identifier may be any image description information marking the second image, or characters, numbers and the like associated with the second image, which may be determined based on the actual application scenario and is not limited herein.
S102, acquiring image information of a second image from a target block of the blockchain network based on the comparison image identification.
In some possible embodiments, after the comparison image identifier submitted by the environment monitoring device is received, a target block whose identifier is consistent with the comparison image identifier may be determined from the blocks of the blockchain network, and the target block may be parsed to obtain the image information of the second image from the block body of the target block. Optionally, each block in the blockchain network may be traversed, and the block header of each block may be parsed to determine the hash value of the root node storing the hash value of the image information of the second image, so as to use the block in which that root node is located as the target block and obtain the image information of the second image from its block body. The image information of the second image is information describing the environmental scene of the second image, including but not limited to environment objects, environment object types, contours, positions, and the like, which may be determined based on the actual application scenario and is not limited herein.
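The lookup in S102 can be sketched as follows; `Block` and its fields are hypothetical stand-ins for the node's actual block structures, which the patent does not specify.

```python
class Block:
    """Hypothetical minimal block: an identifier plus a body payload."""
    def __init__(self, identifier, image_info):
        self.identifier = identifier
        self.body = {"image_info": image_info}

def find_image_info(chain, comparison_id):
    """Traverse the chain; return image info from the matching target block."""
    for block in chain:
        if block.identifier == comparison_id:
            return block.body["image_info"]
    return None   # no target block matches the comparison image identifier
```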
And S103, transmitting the fixed object information and the image information to the consensus node so that the consensus node verifies whether the first image and the second image are images of the same environment scene and transmits a signature confirmation message after verification is passed.
In some possible embodiments, after the fixed object information of the first image and the image information of the second image are acquired, they may be sent to the consensus nodes, so that the consensus nodes verify whether the first image and the second image are images of the same environmental scene. Specifically, each consensus node can match the fixed object information of the first image against the image information of the second image; if an environment object of the same type in the image information of the second image has the same contour and position as the fixed object in the first image, the consensus node may consider the environmental scene of the first image and the environmental scene of the second image to be the same environmental scene. Further, at this point the consensus node may generate and return a signature confirmation message. Alternatively, the signature confirmation message may be any form of information used to confirm that the first image and the second image are images of the same environmental scene, and the specific information content is not limited herein. The signature confirmation message generated by each consensus node can identify the node that generated it, so that after the signature confirmation messages of the consensus nodes are received, the node that sent each message can be determined from the message itself. Optionally, after generating the signature confirmation message, each consensus node may generate a digest of the message through hash calculation and encrypt the digest with its private key to prevent the signature confirmation message from being tampered with.
In some possible embodiments, since the purpose of determining whether the first image and the second image show the same environmental scene based on the fixed object information of the first image and the image information of the second image is to determine whether the environment of that scene has changed, if the shooting date of the first image and the storage date of the image information of the second image differ by only a short period (e.g., several hours or several days), then even if the first image and the second image are determined to be images of the same environmental scene, any environmental change may be so slight that the scene can be assumed unchanged by default. On the other hand, since the environmental condition as of the storage date is already known when the image information of the second image is stored, if the shooting date of the first image is before the storage date of the second image, the first image cannot reflect any environmental change after the storage date. Therefore, when the first image submitted by the environment monitoring device is received, the shooting date of the first image submitted by the environment monitoring device may also be received, and a timestamp may further be obtained from the block header of the target block storing the image information of the second image, where the timestamp represents the creation time of the target block and thus the storage time of the image information of the second image.
Further, the shooting date of the first image and the storage date of the image information of the second image may be compared, and when the shooting date is after the storage date and the date interval between them is greater than a preset date interval, the fixed object information of the first image and the image information of the second image may be sent to the consensus nodes so that the consensus nodes verify whether the first image and the second image show the same environmental scene. The preset date interval may be any date interval, such as one year, ten years, or 6 months, and may be determined based on the actual application scenario, which is not limited herein.
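The date gate described above can be sketched as a small predicate; the 180-day preset interval and the function name are illustrative assumptions (the text allows any interval, e.g. 6 months or one year).

```python
from datetime import date, timedelta

PRESET_INTERVAL = timedelta(days=180)   # assumed preset date interval

def should_verify(shooting_date, storage_date, interval=PRESET_INTERVAL):
    """Proceed to consensus verification only for a sufficiently later shot."""
    return (shooting_date > storage_date
            and shooting_date - storage_date > interval)
```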
S104, receiving the signature confirmation message, and acquiring the hash value of the first characteristic image of the second image from the target block when the signature confirmation message meets a preset consensus strategy.
In some possible embodiments, signature confirmation messages sent by a plurality of consensus nodes in the blockchain network may be received, and when the received signature confirmation messages meet a preset consensus policy, it may be determined that the first image and the second image show the same environmental scene. The preset consensus policy may be that when a certain proportion of all the consensus nodes in the blockchain network consider the first image and the second image to show the same environmental scene, this can be confirmed; for example, after signature confirmation messages sent by ninety-five percent of the consensus nodes are received, the first image and the second image can be confirmed to show the same environmental scene. Alternatively, the preset consensus policy may be that, when signature confirmation messages sent by a plurality of consensus nodes are received, whether the node sending each message belongs to a preset set of consensus nodes is determined based on the signature in the message, and the same-scene determination is made after the sending nodes are preset consensus nodes and/or exceed a certain proportion of all the consensus nodes in the blockchain network. It should be noted that the preset consensus policy may further include, but is not limited to, Proof of Work (PoW), Proof of Stake (PoS), Delegated Proof of Stake (DPoS), Practical Byzantine Fault Tolerance (PBFT), and the Ripple consensus algorithm, which may be determined based on the actual application scenario and is not limited herein.
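The proportional variant of the consensus policy can be sketched as below; the 95% threshold follows the ninety-five percent example in the text, while the function shape is an assumption.

```python
def consensus_reached(confirming_nodes, all_nodes, threshold=0.95):
    """True once enough known consensus nodes sent signature confirmations."""
    all_nodes = set(all_nodes)
    # Count only confirmations that come from the known consensus set.
    valid = set(confirming_nodes) & all_nodes
    return len(valid) / len(all_nodes) >= threshold
```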
Further, when the signature confirmation messages sent by the consensus nodes meet the preset consensus strategy, the hash value of the first feature image of the second image can be obtained from the target block, where the first feature image is an image representing the characteristic features of the environmental scene in the second image, that is, a representative image of the environmental scene. The specific method for acquiring the hash value of the first feature image of the second image from the target block may be determined based on the actual application scenario, which is not limited herein.
S105, determining a hash value of the second feature image of the first image.
In some possible embodiments, after the hash value of the first feature image of the second image is obtained, the calculation manner of that hash value may be determined, and the hash value of the second feature image of the first image may then be determined in the same manner, where the second feature image is an image representing the characteristic features of the environmental scene in the first image, that is, a representative image of that scene. Specifically, when the hash value of the first feature image is an average hash value, that is, obtained based on an average hash algorithm, the second feature image may be scaled to obtain a first scaled feature image with a size of 8×8, and the first scaled feature image may be converted into a 256-level first gray feature image. The color value corresponding to each pixel in the first scaled feature image may be obtained and converted into a gray value; specifically, the gray value of each pixel may be obtained based on any one of a floating-point algorithm (Gray = R×0.3 + G×0.59 + B×0.11), an integer method (Gray = (R×30 + G×59 + B×11)/100), a shift method (Gray = (R×76 + G×151 + B×28) >> 8), and an average method (Gray = (R + G + B)/3), where Gray represents the gray value, R the red color value, G the green color value, and B the blue color value. Optionally, the green value of each pixel may be directly determined as the gray value of that pixel; the specific implementation may be determined based on the actual application scenario and is not limited herein. After the gray values are obtained, they may be placed according to the positions of the corresponding pixels in the first image, so that the first scaled feature image is converted into the 256-level first gray feature image.
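The four grayscale conversion formulas above can be written out as a minimal Python sketch (illustrative only; the integer and average methods use integer division, matching the formulas):

```python
def gray_float(r: int, g: int, b: int) -> float:
    # floating-point algorithm: Gray = R*0.3 + G*0.59 + B*0.11
    return r * 0.3 + g * 0.59 + b * 0.11

def gray_int(r: int, g: int, b: int) -> int:
    # integer method: Gray = (R*30 + G*59 + B*11) / 100
    return (r * 30 + g * 59 + b * 11) // 100

def gray_shift(r: int, g: int, b: int) -> int:
    # shift method: Gray = (R*76 + G*151 + B*28) >> 8
    return (r * 76 + g * 151 + b * 28) >> 8

def gray_avg(r: int, g: int, b: int) -> int:
    # average method: Gray = (R + G + B) / 3
    return (r + g + b) // 3
```

All four map an RGB triple to a single gray value in the 0–255 range; the shift method trades a small bias (pure white maps to 254) for avoiding division entirely.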
Further, a first average value of the gray values of all the pixels of the first gray feature image may be obtained, and each gray value of the first gray feature image may be traversed and compared with the first average value: a gray value greater than the first average value is recorded as 1, and a gray value not greater than the first average value is recorded as 0, yielding a 64-bit data sequence consisting of 0s and 1s, which may then be determined as the hash value of the second feature image. The hash value of the second feature image thus contains 64 bits, each bit corresponding to the record (0 or 1) of one gray value. Each of the above steps for determining the hash value of the second feature image of the first image is identical to the corresponding step used to calculate the hash value of the first feature image of the second image, and is not repeated here.
For example, referring to fig. 3, fig. 3 is a schematic view of a scenario for determining the hash value of a feature image according to an embodiment of the present application. In fig. 3, it is assumed that the scaled feature image in fig. 3 is obtained by scaling the second feature image of the first image, and that the gray feature image in fig. 3 is obtained by calculating the gray value of each pixel of the scaled feature image. The gray values in the gray feature image may then be averaged, yielding an average value of 6.35. Assuming that the bit order used for the hash value of the first feature image of the second image runs from the first gray value of the first row to the last gray value of the last row, the gray values in fig. 3 may be compared with the average value 6.35 in that same order: each gray value not greater than the average value 6.35 is recorded as 0, and each gray value greater than the average value 6.35 is recorded as 1. As shown for the last row of gray values in fig. 3, comparison with the average value 6.35 yields the sequence "10101000", and the hash value of the second feature image of the first image can be determined from the final 64-bit sequence obtained in this manner.
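The average-hash bit derivation described above can be sketched as follows (illustrative only; it assumes the 8×8 gray feature image is already given as a list of rows of gray values):

```python
def average_hash(gray_rows):
    """Compute an average-hash bit string: scan gray values row by row,
    record 1 where a value exceeds the mean, 0 otherwise."""
    pixels = [v for row in gray_rows for v in row]
    avg = sum(pixels) / len(pixels)
    return ''.join('1' if v > avg else '0' for v in pixels)
```

For an 8×8 input this yields the 64-bit data sequence described above; the same function on a smaller matrix, e.g. `[[1, 2], [3, 4]]`, returns `'0011'` (mean 2.5, so only 3 and 4 exceed it).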
Optionally, in some possible embodiments, when the hash value of the first feature image is a perceptual hash value, that is, obtained based on a perceptual hash algorithm, the second feature image may be scaled to obtain a second scaled feature image with a size of 32×32, and the second scaled feature image may be converted into a 256-level second gray feature image; the specific conversion is implemented as described above and not repeated here. Further, the second gray feature image may be subjected to a discrete cosine transform to obtain a discrete feature image with a size of 32×32, and a third scaled feature image with a size of 8×8 may be determined from the upper left corner of the discrete feature image. A second average value of the frequency domain values of all the pixels of the third scaled feature image may then be determined, and each frequency domain value of the third scaled feature image may be traversed and compared with the second average value: a frequency domain value greater than the second average value is recorded as 1, and a frequency domain value not greater than the second average value is recorded as 0, yielding a 64-bit data sequence consisting of 0s and 1s, which may then be determined as the hash value of the second feature image. The hash value of the second feature image contains 64 bits, each bit corresponding to the record (0 or 1) of one frequency domain value, as described in the implementation above and not repeated here.
In particular, each step in determining the hash value of the second feature image of the first image is identical to the corresponding step in calculating the hash value of the first feature image of the second image, and is not repeated here.
For example, as shown in fig. 4, fig. 4 is another schematic view of determining the hash value of a feature image according to an embodiment of the present application. As shown in fig. 4, scaled feature image 1 is an image with a size of 32×32 obtained by scaling the second feature image, and the gray feature image can be obtained by calculating the gray value of each pixel of scaled feature image 1. A discrete cosine transform can then be performed on the gray feature image to obtain the discrete feature image. To ensure that accurate frequency domain values of the second feature image are obtained, the discrete feature image has a size of 32×32 and contains 1024 pixels; starting from the first pixel in its upper left corner, 8 columns are taken horizontally and 8 rows are taken vertically to obtain scaled feature image 2 with a size of 8×8. Assuming that the bit order used for the hash value of the first feature image of the second image runs from the first frequency domain value of the first row to the last frequency domain value of the last row, the frequency domain values in fig. 4 may be compared with the average value of all the frequency domain values in scaled feature image 2 in that same order: each frequency domain value not greater than the average value is recorded as 0, and each frequency domain value greater than the average value is recorded as 1, and the resulting 64-bit sequence consisting of 1s and 0s is determined as the hash value of the second feature image of the first image.
Alternatively, in some possible embodiments, when the hash value of the first feature image is a fuzzy hash value, that is, obtained based on a fuzzy hash algorithm, the pixels of the second feature image may be ordered according to a preset arrangement sequence (consistent with the arrangement sequence of the pixels used to determine the hash value of the first feature image) to obtain a pixel sequence. Further, the pixel sequence may be segmented to obtain a plurality of pixel sub-sequences; specifically, a fixed-length slicing window may be slid from left to right over the pixel sequence to read fixed-length runs of pixels, where the pixels within each window position form one pixel sub-sequence. Under this implementation, the total length of all the obtained pixel sub-sequences is not less than the length of the pixel sequence. Further, the hash value of each pixel sub-sequence may be determined, and these hash values may be connected according to the order of the sub-sequences within the pixel sequence to obtain a hash value sequence, which is used as the hash value of the second feature image of the first image. The length of the window may be determined based on the actual application scenario and is not limited herein, and all pixel sub-sequences obtained with the same fixed-length window have the same length. It should be noted that each step for determining the hash value of the second feature image is identical to the corresponding step for determining the hash value of the first feature image, and is not repeated here.
For example, referring to fig. 5, fig. 5 is a schematic view of still another scenario for determining the hash value of a feature image according to an embodiment of the present application. In fig. 5, it is assumed that the second feature image has a size of 3×3; each pixel in the second feature image may be ordered from the first row to the last row to obtain the pixel sequence "472363266", and a run of pixels "47236", with the same length as the slicing window of length 5 used for determining the hash value of the first feature image, is selected from the beginning of the pixel sequence "472363266" as pixel sub-sequence 1. The numbers in the pixel sequence may represent pixel values, gray values, and the like, which is not limited herein. Further, the slicing window is slid backward by 1 position (or any length not greater than the window length) to select "72363" from the pixel sequence "472363266" as pixel sub-sequence 2, and so on, stopping when the end of the slicing window reaches the last pixel of the pixel sequence "472363266". This yields pixel sub-sequence 1 "47236", pixel sub-sequence 2 "72363", pixel sub-sequence 3 "23632", pixel sub-sequence 4 "36326", and pixel sub-sequence 5 "63266", completing the slicing of the pixel sequence. The length of the slicing window and the step length used in slicing may be determined based on the actual application scenario, which is not limited herein. Further, the hash values of pixel sub-sequence 1, pixel sub-sequence 2, pixel sub-sequence 3, pixel sub-sequence 4, and pixel sub-sequence 5 are calculated respectively, and all the hash values are connected in that order to obtain the hash value of the second feature image.
It should be noted that the hash algorithm used to calculate the hash value of each pixel sub-sequence may be determined based on the hash algorithm used to determine the hash value of the first feature image of the second image, which is not limited herein.
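The fixed-length slicing-window segmentation in the example above can be sketched as follows (illustrative only; the window length 5 and step 1 follow the fig. 5 example):

```python
def window_slices(seq: str, width: int, step: int = 1):
    """Slide a fixed-length window over the sequence one step at a time,
    collecting each window's contents as a sub-sequence."""
    return [seq[i:i + width] for i in range(0, len(seq) - width + 1, step)]
```

With the pixel sequence "472363266" and a window of length 5, this reproduces the five pixel sub-sequences of fig. 5; each sub-sequence would then be hashed and the results concatenated in order.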
Alternatively, in some possible embodiments, when the hash value of the first feature image is a difference hash value, that is, obtained based on a difference hash algorithm, the second feature image may be scaled to a fourth scaled feature image with a size of 9×8, and the fourth scaled feature image may be converted into a 256-level third gray feature image. The differences between adjacent gray values in each row are then determined, yielding a difference sequence containing 64 difference values, where the ordering of the 64 difference values may be determined based on the ordering used when determining the hash value of the first feature image, which is not limited herein. Further, a 64-bit sequence may be derived by traversing the difference sequence from its first value: if a difference value is positive, that is, the preceding gray value is greater than the following one, it is recorded as 1, otherwise as 0, and the resulting sequence is determined as the hash value of the second feature image.
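A minimal sketch of this difference-hash derivation (illustrative only; it assumes the 9×8 gray feature image is given as 8 rows of 9 gray values, so each row contributes 8 comparisons):

```python
def difference_hash(gray_rows):
    """For each row, compare adjacent gray values: record 1 when the
    preceding value is greater than the following one, else 0."""
    bits = []
    for row in gray_rows:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)
```

Eight rows of nine values produce exactly the 64-bit sequence described above; a single row such as `[3, 1, 2]` yields `'10'`.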
S106, determining the similarity of the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image, and sending an environment change notification to an environment monitoring department when the similarity is smaller than a preset similarity.
In some possible embodiments, when the hash value of the second feature image of the first image and the hash value of the first feature image of the second image are both one of a perceptual hash value, an average hash value, or a difference hash value, the hash value of the second feature image may be compared with the hash value of the first feature image to determine the number of data bits in which the two hash values differ. Further, the ratio of the number of differing data bits to the number of bits of the hash value of the first feature image (or of the second feature image) is determined, and the similarity of the first image and the second image is determined based on this ratio. Specifically, the similarity may be determined based on a correspondence between the ratio and a preset similarity table, or the percentage corresponding to the difference between 1 and the ratio may be determined as the similarity of the first image and the second image. Referring to fig. 6, fig. 6 is a schematic view of a scene for determining similarity according to an embodiment of the present application. In fig. 6, it is assumed that the hash value of the second feature image and the hash value of the first feature image are as shown in fig. 6, and comparing them readily yields 12 data bits that differ. Assuming that both hash values are 64 bits long, the ratio of the number of differing data bits to the number of bits of either hash value is 18.75%, that is, the dissimilarity of the first image and the second image is 18.75%; in other words, the similarity of the first image and the second image is 81.25%.
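The bit-comparison similarity just described can be sketched as follows (illustrative; it takes the two hashes as equal-length bit strings and returns the similarity as a fraction):

```python
def bit_similarity(hash1: str, hash2: str) -> float:
    """Similarity = 1 - (number of differing bits / total bits),
    for two equal-length hash bit strings."""
    if len(hash1) != len(hash2):
        raise ValueError("hash values must have the same length")
    differing = sum(a != b for a, b in zip(hash1, hash2))
    return 1 - differing / len(hash1)
```

For two 64-bit hashes differing in 12 positions this returns 0.8125, matching the 81.25% similarity of the fig. 6 example.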
In some possible embodiments, when the hash value of the second feature image of the first image and the hash value of the first feature image of the second image are both fuzzy hash values, the weighted editing distance from the hash value of the second feature image to the hash value of the first feature image may be determined, along with the length of the hash value of the first feature image and the length of the hash value of the second feature image, from which the total length of the two hash values is obtained. Here, the weighted editing distance is derived from the minimum set of operations (including insertion, deletion, modification, exchange, and the like) required to transform the hash value of the second feature image into the hash value of the first feature image; different operations correspond to different weight values, and the sum of the weight values of all the operations is the weighted editing distance from the hash value of the second feature image to the hash value of the first feature image. For example, referring to fig. 7, fig. 7 is a schematic view of a scenario for determining a weighted editing distance according to an embodiment of the present application. As shown in fig. 7, the hash value of the second feature image is 41e95c2d9fb76019fb7601140e21d46baa and the hash value of the first feature image is 1e295c2d9fb76019fb7601140e21d46baa, and the following editing steps are required to go from the former to the latter: deleting the 4 in the first position of the hash value of the second feature image, adding a 2 between the e and the 9, and changing the last a to 0. Through these steps, the hash value of the second feature image is changed into the same hash value as that of the first feature image.
Assuming that the weight value corresponding to the deletion operation is x, the weight value corresponding to the addition operation is y, and the weight value corresponding to the modification operation is z, the sum of the three weight values (x + y + z) is the final weighted editing distance.
Further, the ratio of the weighted editing distance to the total length of the hash value of the second feature image and the hash value of the first feature image may be determined, turning the absolute result into a relative one, and the ratio may then be mapped to an integer value from 0 to 100 to obtain the similarity of the first feature image and the second feature image, where 100 indicates that the two hash values are completely identical and 0 indicates that they are completely dissimilar. When the similarity of the two images is smaller than the preset similarity, it can be determined that the environmental scene corresponding to the first image and the second image has changed to a certain extent, and the environment monitoring department can then be notified of the environmental change, reminding it to determine whether the environment has improved or further deteriorated so that corresponding environmental countermeasures can be taken. The mapping manner and the preset similarity may be determined based on the actual application scenario, which is not limited herein.
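One possible mapping of the kind described above (illustrative only; the linear mapping is an assumption, since the patent leaves the mapping manner open):

```python
def fuzzy_similarity(weighted_editing_distance: float, total_length: int) -> int:
    """Map the ratio of weighted editing distance to total hash length
    onto an integer score in [0, 100]; 100 = identical, 0 = dissimilar."""
    ratio = min(weighted_editing_distance / total_length, 1.0)
    return round((1 - ratio) * 100)
```

A distance of zero maps to 100 (identical hashes), and any distance at or beyond the total length is clamped to 0.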
In the embodiment of the application, based on the fixed object information of the first image and the image information of the second image, whether the first image and the second image are images of the same environmental scene can be determined, so that the environmental change of the environmental scene is determined based on different images of the same environmental scene. Determining whether the first image and the second image are images of the same environmental scene based on a consensus node in the blockchain network may improve the accuracy of this determination. Further, the similarity of the first image and the second image is determined based on the hash value of the first feature image and the hash value of the second feature image, so that the environmental change can be detected more accurately, and the applicability is high.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an environmental monitoring device according to an embodiment of the present application. The device 1 provided in the embodiment of the application includes:
the receiving module 11 is configured to receive a first image submitted by an environmental monitoring device and a comparison image identifier, and obtain fixed object information of the first image, where the comparison image identifier is used to mark a second image that is compared with the first image, and the fixed object information includes contour information, position information, and a fixed object class;
a first obtaining module 12, configured to obtain image information of the second image from a target block of the blockchain network based on the comparison image identifier;
a transmitting module 13, configured to transmit the fixed object information and the image information to a consensus node, so that the consensus node verifies whether the first image and the second image are images of the same environmental scene, and transmits a signature confirmation message after the verification is passed;
a second obtaining module 14, configured to receive the signature confirmation message and obtain, when the signature confirmation message meets a preset consensus policy, a hash value of a first feature image of the second image from the target block;
A first determining module 15, configured to determine a hash value of the second feature image of the first image;
a second determining module 16, configured to determine a similarity between the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image, and send an environmental change notification to an environmental monitoring department when the similarity is less than a preset similarity.
In some possible embodiments, the hash value of the first feature image is an average hash value; the first determining module 15 includes:
a first processing unit 151, configured to scale the second feature image to obtain a first scaled feature image with a size of 8×8;
a second processing unit 152, configured to convert the first scaled feature image into a 256-level first gray feature image, and determine a first average value of gray values of all pixels of the first gray feature image;
a first determining unit 153, configured to compare each gray value in the first gray feature image with the first average value to determine a hash value of a second feature image of the first image, where the hash value of the second feature image includes 64 bits, one bit corresponds to one gray value of the first gray feature image, and a bit when any gray value of the first gray feature image is greater than the first average value is recorded as 1, otherwise, is recorded as 0.
In some possible embodiments, the hash value of the first feature image is a perceptual hash value; the first determining module 15 includes:
a third processing unit 154, configured to scale the second feature image to obtain a second scaled feature image with a size of 32×32, and convert the second scaled feature image into a second gray feature image with 256 steps;
a fourth processing unit 155, configured to perform discrete cosine transform on the second gray scale feature image to obtain a discrete feature image with a size of 32×32, and determine a third scaled feature image with a size of 8×8 from an upper left corner of the discrete feature image;
a second determining unit 156, configured to determine a second average value of frequency domain values of all pixels of the third scaled feature image, and compare each frequency domain value in the third scaled feature image with the second average value to determine a hash value of the second feature image of the first image, where the hash value of the second feature image includes 64 bits, one bit corresponds to one frequency domain value of the third scaled feature image, and a bit record when any frequency domain value of the third scaled feature image is greater than the second average value is 1, otherwise, is 0.
In some possible embodiments, the second determining module 16 includes:
a third determining unit 161 configured to determine the number of data bits that differ between the hash value of the second feature image and the hash value of the first feature image;
a fourth determining unit 162 configured to determine a ratio of the number of the different data bits to the number of bits of the hash value of the first feature image or the hash value of the second feature image, and determine a similarity between the first image and the second image based on the ratio.
In some possible embodiments, the hash value of the first feature image is a fuzzy hash value; the first determining module 15 includes:
a fifth processing unit 157, configured to sort the pixels of the second feature image according to a preset arrangement sequence, so as to obtain a pixel sequence;
a sixth processing unit 158, configured to divide the pixel sequence to obtain a plurality of pixel sub-sequences, where a sum of data lengths of the plurality of pixel sub-sequences is not smaller than a data length of the pixel sequence;
and a connection unit 159 configured to determine a hash value of each sub-sequence of pixels, and connect the hash values of each sub-sequence of pixels to obtain a hash value of the second feature image of the first image.
In some possible embodiments, the second determining module 16 includes:
a fifth determining unit 163 for determining a weighted editing distance from the hash value of the second feature image to the hash value of the first feature image;
a sixth determining unit 164 configured to determine a total data length of a data length of the hash value of the first feature image and a data length of the hash value of the second feature image;
a seventh determining unit 165, configured to determine a ratio of the weighted editing distance to the total data length, and perform numerical mapping on the ratio based on a preset mapping rule, to obtain a similarity between the first image and the second image.
In some possible embodiments, the environmental monitoring device further submits a shooting date of the first image; the above device 1 further comprises:
the third obtaining module 17 is further configured to obtain a timestamp in a block header of the target block, and determine a storage date of the image information based on the timestamp;
the comparison module 18 is further configured to compare a shooting date of the first image with a storage date of the image information, and if the shooting date is after the storage date and a date interval between the shooting date and the storage date is greater than a preset date interval, execute the step of transmitting the fixed object information and the image information to a consensus node.
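The date comparison performed by comparison module 18 can be sketched as follows (illustrative only; the function name and use of `datetime.date` are assumptions for the sketch):

```python
from datetime import date

def needs_verification(shooting_date: date, storage_date: date,
                       preset_interval_days: int) -> bool:
    """Send the fixed object information and image information to the
    consensus node only when the first image was shot after the second
    image was stored and the gap exceeds the preset date interval."""
    return (shooting_date > storage_date
            and (shooting_date - storage_date).days > preset_interval_days)
```

For example, with a preset interval of 30 days, a first image shot two months after the second image was stored triggers the transmission step, while one shot nine days later does not.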
In a specific implementation, the device 1 may execute an implementation manner provided by each step in fig. 2 through each built-in functional module, and specifically, the implementation manner provided by each step may be referred to, which is not described herein again.
In the embodiment of the application, based on the fixed object information of the first image and the image information of the second image, whether the first image and the second image are images of the same environmental scene can be determined, so that the environmental change of the environmental scene is determined based on different images of the same environmental scene. Determining whether the first image and the second image are images of the same environmental scene based on a consensus node in the blockchain network may improve the accuracy of this determination. Further, the similarity of the first image and the second image is determined based on the hash value of the first feature image and the hash value of the second feature image, so that the environmental change can be detected more accurately, and the applicability is high.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an apparatus according to an embodiment of the present application. As shown in fig. 9, the apparatus 1000 in this embodiment may include: a processor 1001, a network interface 1004, and a memory 1005; in addition, the apparatus 1000 may further include: a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and a keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 9, the memory 1005, which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a device control application.
In the apparatus 1000 shown in fig. 9, the network interface 1004 may provide a network communication function; while user interface 1003 is primarily used as an interface for providing input to a user; and the processor 1001 may be used to invoke a device control application stored in the memory 1005 to implement:
receiving a first image submitted by environment monitoring equipment and a comparison image identifier, and acquiring fixed object information of the first image, wherein the comparison image identifier is used for marking a second image which is compared with the first image, and the fixed object information comprises contour information, position information and a fixed object type;
acquiring image information of the second image from a target block of the blockchain network based on the comparison image identifier;
transmitting the fixed object information and the image information to a consensus node so that the consensus node verifies whether the first image and the second image are images of the same environmental scene and transmits a signature confirmation message after the verification is passed;
receiving the signature confirmation message and acquiring a hash value of a first characteristic image of the second image from the target block when the signature confirmation message meets a preset consensus strategy;
Determining a hash value of a second feature image of the first image;
and determining the similarity of the first image and the second image based on the hash value of the first characteristic image and the hash value of the second characteristic image, and sending an environment change notification to an environment monitoring department when the similarity is smaller than a preset similarity.
It should be appreciated that in some possible embodiments, the processor 1001 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The memory may include read-only memory and random access memory and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store information about the device type.
In a specific implementation, the device 1000 may execute, through its built-in functional modules, the implementations provided by the steps in fig. 2. For details, refer to the implementations provided by those steps; they are not repeated here.
In the embodiment of the application, based on the fixed object information of the first image and the image information of the second image, it can be determined whether the first image and the second image are images of the same environmental scene, so that environmental change in that scene can be determined from different images of the same scene. Having a consensus node in the blockchain network decide whether the two images depict the same environmental scene improves the accuracy of that determination. Further, because the similarity between the first image and the second image is determined from the hash value of the first feature image and the hash value of the second feature image, environmental change can be detected more accurately, giving the approach high applicability.
The embodiments of the present application further provide a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the method provided by the steps in fig. 2. For details, refer to the implementations provided by those steps; they are not repeated here.
The computer-readable storage medium may be an internal storage unit of the task processing device provided in any of the foregoing embodiments, for example, a hard disk or memory of an electronic device. It may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device. The computer-readable storage medium may also include a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like. Further, it may include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been or is to be output.
The terms "first," "second," and the like in the claims, specification, and drawings of this application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include other steps or elements not listed or inherent to it. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the foregoing description has generally described the components and steps of each example in terms of function. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The foregoing disclosure describes only preferred embodiments of the present application and is not intended to limit its scope of protection; equivalent variations made according to the claims of the present application still fall within the scope of the application.

Claims (10)

1. An environmental monitoring method for use in a blockchain network, the method comprising:
receiving a first image and a comparison image identifier submitted by an environment monitoring device, and acquiring fixed object information of the first image, wherein the comparison image identifier identifies a second image against which the first image is compared, and the fixed object information comprises contour information, position information, and a fixed object category;
determining, from the blocks of the blockchain network, a target block whose identifier is consistent with the comparison image identifier, and parsing the target block to acquire image information of a second image from the block body of the target block;
sending the fixed object information of the first image and the image information of the second image to a consensus node, so that the consensus node verifies whether the first image and the second image are images of the same environmental scene and sends a signature confirmation message after the verification passes, wherein the consensus node matches the fixed object information of the first image against the image information of the second image, and if the image information of the second image contains an environmental object with the same contour, position, and fixed object category as the fixed object in the first image, the consensus node confirms that the environmental scene of the first image and the environmental scene of the second image are the same environmental scene;
receiving the signature confirmation message, and acquiring a hash value of a first feature image of the second image from the target block when the signature confirmation message satisfies a preset consensus policy;
determining a hash value of a second feature image of the first image; and
determining the similarity between the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image, and sending an environment change notification to an environment monitoring department when the similarity is below a preset similarity threshold.
2. The method of claim 1, wherein the hash value of the first feature image is an average hash value, and the determining the hash value of the second feature image of the first image comprises:
scaling the second feature image to obtain a first scaled feature image of size 8 × 8;
converting the first scaled feature image into a 256-level first grayscale feature image, and determining a first average of the gray values of all pixels of the first grayscale feature image;
comparing each gray value in the first grayscale feature image with the first average to determine the hash value of the second feature image of the first image, wherein the hash value of the second feature image comprises 64 bits, each bit corresponding to one gray value of the first grayscale feature image, and the bit corresponding to any gray value of the first grayscale feature image is recorded as 1 if that gray value is greater than the first average, and as 0 otherwise.
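The average-hash (aHash) steps of claim 2 can be sketched as follows. Block averaging stands in for the unspecified scaling method, the input is assumed to already be a grayscale pixel grid with dimensions divisible by 8, and all names are illustrative:

```python
def average_hash(gray, size=8):
    """aHash sketch per claim 2: downscale to size x size, compute the mean
    gray value, and set each of the 64 bits to 1 iff its pixel > mean."""
    h = len(gray) // size       # block height (assumes divisibility)
    w = len(gray[0]) // size    # block width
    # Downscale by block averaging (a stand-in for proper resampling).
    small = [
        [sum(gray[r * h + i][c * w + j] for i in range(h) for j in range(w)) / (h * w)
         for c in range(size)]
        for r in range(size)
    ]
    mean = sum(sum(row) for row in small) / (size * size)
    # 64-bit hash, row-major: bit = 1 where pixel > mean, else 0.
    return [1 if px > mean else 0 for row in small for px in row]

# 16x16 synthetic image: bright left half (200), dark right half (50).
img = [[200 if x < 8 else 50 for x in range(16)] for y in range(16)]
bits = average_hash(img)
print(len(bits))   # 64
print(bits[:8])    # [1, 1, 1, 1, 0, 0, 0, 0] -- bright half above the mean
```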
3. The method of claim 1, wherein the hash value of the first feature image is a perceptual hash value, and the determining the hash value of the second feature image of the first image comprises:
scaling the second feature image to obtain a second scaled feature image of size 32 × 32, and converting the second scaled feature image into a 256-level second grayscale feature image;
performing a discrete cosine transform on the second grayscale feature image to obtain a discrete feature image of size 32 × 32, and taking an 8 × 8 third scaled feature image from the upper-left corner of the discrete feature image;
determining a second average of the frequency-domain values of all pixels of the third scaled feature image, and comparing each frequency-domain value in the third scaled feature image with the second average to determine the hash value of the second feature image of the first image, wherein the hash value of the second feature image comprises 64 bits, each bit corresponding to one frequency-domain value of the third scaled feature image, and the bit corresponding to any frequency-domain value of the third scaled feature image is recorded as 1 if that value is greater than the second average, and as 0 otherwise.
4. The method according to claim 2 or 3, wherein the determining the similarity between the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image comprises:
determining the number of data bits that differ between the hash value of the second feature image and the hash value of the first feature image;
determining the ratio of the number of differing data bits to the number of bits of the hash value of the first feature image or of the second feature image, and determining the similarity between the first image and the second image based on the ratio.
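This is a Hamming-distance comparison over the two 64-bit hashes. The claim leaves the mapping from the ratio to a similarity unspecified; one natural reading, assumed here, is similarity = 1 − ratio:

```python
def hamming_similarity(hash_a, hash_b):
    """Claim 4 sketch: count differing bits, divide by the bit count,
    and map the ratio to a similarity (1 - ratio is an assumption)."""
    assert len(hash_a) == len(hash_b)
    differing = sum(a != b for a, b in zip(hash_a, hash_b))
    ratio = differing / len(hash_a)
    return 1 - ratio

print(hamming_similarity([1, 0, 1, 1], [1, 1, 1, 0]))  # 2 of 4 differ -> 0.5
```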
5. The method of claim 1, wherein the hash value of the first feature image is a fuzzy hash value, and the determining the hash value of the second feature image of the first image comprises:
arranging all pixels of the second feature image in a preset order to obtain a pixel sequence;
dividing the pixel sequence to obtain a plurality of pixel sub-sequences, wherein the sum of the data lengths of the pixel sub-sequences is not smaller than the data length of the pixel sequence;
determining the hash value of each pixel sub-sequence, and concatenating the hash values of the pixel sub-sequences to obtain the hash value of the second feature image of the first image.
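A sketch of the fuzzy-hash steps in claim 5: flatten the pixels in a preset (here row-major) order, split the sequence into sub-sequences, hash each piece, and concatenate. Fixed-size chunking and the 8-hex-digit per-chunk digests are assumptions; production fuzzy hashes such as ssdeep pick chunk boundaries with a rolling hash so that local edits perturb only nearby segments:

```python
import hashlib

def fuzzy_hash(gray, chunks=4):
    """Claim 5 sketch: row-major pixel sequence, split into `chunks`
    sub-sequences, short digest per sub-sequence, digests concatenated."""
    seq = bytes(px for row in gray for px in row)   # preset ordering
    step = -(-len(seq) // chunks)                   # ceiling division
    parts = [seq[i:i + step] for i in range(0, len(seq), step)]
    # The sub-sequences together cover the whole sequence (claim 5).
    return "".join(hashlib.sha256(p).hexdigest()[:8] for p in parts)

img = [[10, 20, 30, 40] for _ in range(4)]          # toy 4x4 image
print(len(fuzzy_hash(img)))                         # 4 chunks x 8 hex = 32
```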
6. The method of claim 5, wherein the determining the similarity between the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image comprises:
determining a weighted edit distance from the hash value of the second feature image to the hash value of the first feature image;
determining the total of the data length of the hash value of the first feature image and the data length of the hash value of the second feature image;
determining the ratio of the weighted edit distance to the total data length, and numerically mapping the ratio according to a preset mapping rule to obtain the similarity between the first image and the second image.
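These steps can be sketched with a weighted Levenshtein distance over the two hash strings. The operation weights and the final mapping rule (here a linear map onto 0–100) are assumptions, since the claim leaves both as presets:

```python
def edit_distance(a, b, w_ins=1, w_del=1, w_sub=1):
    """Weighted Levenshtein distance between two hash strings."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,      # delete from a
                          d[i][j - 1] + w_ins,      # insert into a
                          d[i - 1][j - 1] + cost)   # substitute / match
    return d[m][n]

def fuzzy_similarity(hash_a, hash_b):
    """Claim 6 sketch: ratio of weighted edit distance to the combined hash
    length, mapped onto 0..100 (the mapping rule is an assumption)."""
    ratio = edit_distance(hash_a, hash_b) / (len(hash_a) + len(hash_b))
    return round(100 * (1 - ratio))

print(edit_distance("kitten", "sitting"))   # 3 (classic example)
print(fuzzy_similarity("abcd", "abcd"))     # identical hashes -> 100
```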
7. The method of claim 1, wherein the environment monitoring device also submits a capture date of the first image, and before the fixed object information and the image information are sent to the consensus node, the method further comprises:
acquiring a timestamp in the block header of the target block, and determining a storage date of the image information based on the timestamp;
comparing the capture date of the first image with the storage date of the image information, and if the capture date is after the storage date and the interval between the two dates is greater than a preset date interval, executing the step of sending the fixed object information and the image information to the consensus node.
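The date gate in claim 7 can be sketched as a simple predicate. The 30-day interval is an assumed value for the preset date interval; deriving the storage date from the block-header timestamp is elided:

```python
from datetime import date, timedelta

def should_send_to_consensus(capture_date, storage_date, min_interval_days=30):
    """Claim 7 sketch: forward the images for consensus verification only
    when the capture date falls after the block-stored date by more than a
    preset interval (the 30-day default is an assumption)."""
    return (capture_date > storage_date
            and capture_date - storage_date > timedelta(days=min_interval_days))

print(should_send_to_consensus(date(2020, 3, 1), date(2020, 1, 1)))  # True (60 days)
print(should_send_to_consensus(date(2020, 1, 10), date(2020, 1, 1)))  # False (9 days)
```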
8. An environmental monitoring device, the device comprising:
a receiving module, configured to receive a first image and a comparison image identifier submitted by an environment monitoring device and to acquire fixed object information of the first image, wherein the comparison image identifier identifies a second image against which the first image is compared, and the fixed object information comprises contour information, position information, and a fixed object category;
a first acquisition module, configured to determine, from the blocks of the blockchain network, a target block whose identifier is consistent with the comparison image identifier, and to parse the target block to acquire image information of a second image from the block body of the target block;
a sending module, configured to send the fixed object information of the first image and the image information of the second image to a consensus node, so that the consensus node verifies whether the first image and the second image are images of the same environmental scene and sends a signature confirmation message after the verification passes, wherein the consensus node matches the fixed object information of the first image against the image information of the second image, and if the image information of the second image contains an environmental object with the same contour, position, and fixed object category as the fixed object in the first image, the consensus node confirms that the environmental scene of the first image and the environmental scene of the second image are the same environmental scene;
a second acquisition module, configured to receive the signature confirmation message and to acquire a hash value of a first feature image of the second image from the target block when the signature confirmation message satisfies a preset consensus policy;
a first determining module, configured to determine a hash value of a second feature image of the first image; and
a second determining module, configured to determine the similarity between the first image and the second image based on the hash value of the first feature image and the hash value of the second feature image, and to send an environment change notification to an environment monitoring department when the similarity is below a preset similarity threshold.
9. An apparatus comprising a processor and a memory, the processor and the memory being interconnected;
the memory is for storing a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is executed by a processor to implement the method of any one of claims 1 to 7.
CN201911165967.4A 2019-11-25 2019-11-25 Environment monitoring method, device, equipment and storage medium Active CN110942021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911165967.4A CN110942021B (en) 2019-11-25 2019-11-25 Environment monitoring method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911165967.4A CN110942021B (en) 2019-11-25 2019-11-25 Environment monitoring method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110942021A CN110942021A (en) 2020-03-31
CN110942021B true CN110942021B (en) 2024-01-16

Family

ID=69907481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911165967.4A Active CN110942021B (en) 2019-11-25 2019-11-25 Environment monitoring method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110942021B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814807B (en) * 2020-07-16 2023-10-24 抖音视界有限公司 Method, apparatus, electronic device, and computer-readable medium for processing image
CN111882536A (en) * 2020-07-24 2020-11-03 富德康(北京)科技股份有限公司 Method for monitoring quantity of bulk cargo based on picture comparison
CN112115295A (en) * 2020-08-27 2020-12-22 广州华多网络科技有限公司 Video image detection method and device and electronic equipment
CN112215302A (en) * 2020-10-30 2021-01-12 Oppo广东移动通信有限公司 Image identification method and device and terminal equipment
CN113032594B (en) * 2021-02-26 2023-12-08 广东核电合营有限公司 Label image storage method, apparatus, computer device and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109241897A (en) * 2018-08-29 2019-01-18 深圳华远云联数据科技有限公司 Processing method, device, gateway and the storage medium of monitoring image
CN109819017A (en) * 2018-12-25 2019-05-28 中链科技有限公司 Environmental monitoring and data processing method and device based on block chain
CN110309704A (en) * 2019-04-30 2019-10-08 泸州市气象局 A kind of extreme weather real-time detection method, system and terminal

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9536176B2 (en) * 2015-03-23 2017-01-03 International Business Machines Corporation Environmental-based location monitoring


Non-Patent Citations (1)

Title
Research and Application of Image Hash Algorithms; Yao Yongming et al.; Journal of Xi'an University (Natural Science Edition); 19(5); 36-39 *

Also Published As

Publication number Publication date
CN110942021A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN110942021B (en) Environment monitoring method, device, equipment and storage medium
CN110969207B (en) Electronic evidence processing method, device, equipment and storage medium
CN100428782C (en) Information processing method and apparatus
US10726262B2 (en) Imaging support system, device and method, and imaging terminal
EP3234845B1 (en) Model anti-collusion watermark
JP2007249516A (en) Image data recording method, operation result recording method by image data, image data recording device, and operation result recording system by image data
CN113989627B (en) City prevention and control image detection method and system based on asynchronous federal learning
CN104767816A (en) Photography information collecting method, device and terminal
WO2019123988A1 (en) Calibration data generating device, calibration data generating method, calibration system, and control program
CN111104872A (en) GF-2 image integrity authentication method applying SIFT and SVD perceptual hashing
CN110929230B (en) Work management method, device, equipment and storage medium
CN110135413B (en) Method for generating character recognition image, electronic equipment and readable storage medium
CN1988583A (en) Image reading apparatus, electronic document generation method, and storing medium storing electronic document generation program
Ruban et al. The model and the method for forming a mosaic sustainable marker of augmented reality
Wang et al. RST invariant fragile watermarking for 2D vector map authentication
Marçal et al. A steganographic method for digital images robust to RS steganalysis
CN106713297A (en) Electronic data fixing platform based on cloud service
JP7277912B2 (en) Hash chain use data non-falsification proof system and data management device therefor
CN109819138B (en) Method and system for monitoring field sampling
CN111125141A (en) National power grid asset digital evidence storing and verifying method and equipment based on block chain
CN109657487A (en) Image processing method, image authentication method and its device
CN105023235A (en) Electronic chart watermarking method based on space redundancy relation
JP2012527682A (en) Image file generation method for forgery / alteration verification and image file forgery / alteration verification method
JP2009200754A (en) Digital watermarking device, digital watermark verification device, digital watermarking method, and digital watermark verification method
JP6391006B2 (en) Material inspection device, material inspection method, and material inspection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant