CN113114982B - Internet of things data transmission method and system - Google Patents

Internet of things data transmission method and system

Info

Publication number
CN113114982B
CN113114982B CN202110272099.0A
Authority
CN
China
Prior art keywords
color
data
picture
monitoring video
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110272099.0A
Other languages
Chinese (zh)
Other versions
CN113114982A (en)
Inventor
李果
沈伟
秦倩
陈旭
赵崇林
张海琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Dongxin Yilian Technology Co ltd
Original Assignee
Guangxi Dongxin Yilian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Dongxin Yilian Technology Co ltd filed Critical Guangxi Dongxin Yilian Technology Co ltd
Priority to CN202110272099.0A priority Critical patent/CN113114982B/en
Publication of CN113114982A publication Critical patent/CN113114982A/en
Application granted granted Critical
Publication of CN113114982B publication Critical patent/CN113114982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences

Abstract

The invention belongs to the technical field of the Internet of Things, and particularly relates to an Internet of Things data transmission method and system. The method comprises the following steps: acquiring thermal imaging monitoring video data; extracting the monitoring video pictures in the thermal imaging monitoring video data frame by frame; performing line processing on each frame of monitoring video picture to obtain a characteristic picture and a color distribution data packet; and encapsulating all the characteristic pictures and color distribution data packets, sending them to a server, and storing them. The method first extracts the collected video frame by frame and then performs characterization processing on each frame, storing only the main lines of each picture; the color information of each picture is then simplified and stored; finally the line pictures and the color information are sent to the server, which restores them into a characterized monitoring picture. This preserves monitoring accuracy while reducing the amount of data transmitted and avoiding transmission congestion.

Description

Internet of things data transmission method and system
Technical Field
The invention belongs to the technical field of Internet of things, and particularly relates to a data transmission method and system of the Internet of things.
Background
The Internet of Things (IoT) connects, in real time, any object or process that needs to be monitored or interacted with, through devices and technologies such as information sensors, radio-frequency identification, global positioning systems, infrared sensors and laser scanners. It collects the required information about the object or process, such as sound, light, heat, electricity, mechanics, chemistry, biology and position, and, through every possible network access, realizes ubiquitous connection between things and people as well as intelligent sensing, identification and management of objects and processes. The Internet of Things is an information carrier based on the Internet, traditional telecommunication networks and the like; it allows all ordinary physical objects that can be independently addressed to form an interconnected network.
In the current breeding industry, the organisms raised in a farm need to be monitored by video surveillance. Large-scale farms in particular are generally equipped with multiple high-definition monitoring devices. On the one hand, these devices allow the current state of the bred organisms to be observed visually from the background; on the other hand, during remote guidance and remote treatment, the monitoring devices provide the video information required by remote assistants.
However, although the video acquired by a high-definition monitoring device has high definition, the amount of video data it collects is large, and the number of monitoring devices is also large, so the amount of information to be transmitted per unit time is huge. A high-bandwidth transmission channel therefore has to be provided, which undoubtedly increases cost, while a low-bandwidth transmission channel suffers from congestion.
Disclosure of Invention
The embodiment of the invention aims to provide a method and a system for transmitting data of the Internet of things, and aims to solve the problems in the background technology.
The embodiment of the invention is realized in such a way that a data transmission method of the Internet of things comprises the following steps:
acquiring thermal imaging monitoring video data;
extracting a monitoring video picture in the thermal imaging monitoring video data frame by frame;
performing line processing on each frame of monitoring video picture to obtain a characteristic picture and a color distribution data packet, wherein the characteristic picture is used for representing the appearance contour of cultured organisms and culture scenes in the monitoring video picture, and the color distribution data packet comprises the distribution condition of colors in each area in the characteristic picture;
and packaging all the characteristic pictures and the color distribution data packets, and sending the characteristic pictures and the color distribution data packets to the server, so that the server can generate and store the characteristic monitoring video data according to the characteristic pictures and the color distribution data packets.
Preferably, the step of extracting the surveillance video picture in the thermal imaging surveillance video data frame by frame specifically includes:
analyzing thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures;
naming each frame of monitoring video picture according to a time sequence;
and calculating the similarity of two adjacent monitoring video pictures according to the named sequence, and representing the continuous monitoring video pictures with the similarity higher than a set similarity threshold value by using a video clip characteristic picture, wherein the video clip characteristic picture is any one of the continuous monitoring video pictures.
Preferably, the step of performing line processing on each frame of the surveillance video picture to obtain the characteristic picture and the color distribution data packet specifically includes:
reading color data of each pixel point in a monitoring video picture;
calculating the color difference value between color data of adjacent pixel points, and dividing the pixel points with the color difference value not higher than a set color difference threshold value into the same color area;
newly building a blank picture, drawing the boundary between different color areas in the monitoring video picture in the blank picture, numbering each color area, and obtaining a characteristic picture;
a color data distribution data packet is generated.
Preferably, the step of generating the color data distribution data packet specifically includes:
reading color data of each pixel point in each color area;
calculating the average value of the color data in each color area to obtain average color data;
and establishing a mapping relation table between each color area number and corresponding average color data, and generating a color data distribution data packet according to the mapping relation table.
Preferably, the step of generating the characterized surveillance video data according to the characterized picture and the color distribution data packet specifically includes:
reading the characterized picture and the color distribution data packet;
coloring each region in the characteristic picture according to the distribution condition of colors in each region contained in the color distribution data packet;
and synthesizing the characterized monitoring video data according to all the characterized pictures.
Preferably, after the step of coloring each region in the characterized picture according to the distribution of colors recorded in the color distribution data packet, the method further includes performing gradient processing on the dividing lines.
Preferably, after the step of acquiring the thermal imaging monitoring video data, the step of compressing the thermal imaging monitoring video data is further included.
Another object of an embodiment of the present invention is to provide an internet of things data transmission system, including:
the information acquisition module is used for acquiring thermal imaging monitoring video data;
the image extraction module is used for extracting the monitoring video images in the thermal imaging monitoring video data frame by frame;
the image processing module is used for carrying out line processing on each frame of monitoring video image to obtain a characteristic image and a color distribution data packet, wherein the characteristic image is used for representing the appearance contour of the cultured organisms and the cultured scene in the monitoring video image, and the color distribution data packet comprises the distribution condition of colors in each area in the characteristic image;
and the data sending module is used for packaging all the characteristic pictures and the color distribution data packets and sending the characteristic pictures and the color distribution data packets to the server, so that the server can generate and store the characteristic monitoring video data according to the characteristic pictures and the color distribution data packets.
Preferably, the picture extracting module includes:
the analysis unit is used for analyzing the thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures;
the naming unit is used for naming each frame of monitoring video picture according to the time sequence;
and the content merging unit is used for calculating the similarity of two adjacent monitoring video pictures according to the named sequence, and representing the continuous monitoring video pictures with the similarity higher than a set similarity threshold value by one video clip characteristic picture, wherein the video clip characteristic picture is any one of the continuous monitoring video pictures.
Preferably, the image processing module includes:
the data reading unit is used for reading the color data of each pixel point in the monitoring video picture;
the color difference calculating unit is used for calculating the color difference value between color data of adjacent pixel points and dividing the pixel points of which the color difference value is not higher than a set color difference threshold value into the same color area;
the characteristic processing unit is used for newly building a blank picture, drawing a boundary line between different color areas in the monitoring video picture in the blank picture, and numbering each color area to obtain a characteristic picture;
and the data packet generating unit is used for generating the color data distribution data packet.
The Internet of Things data transmission method provided by the embodiment of the invention first extracts the collected video frame by frame and then performs characterization processing on each frame, storing only the main lines of each picture; the color information of each picture is then simplified and stored; finally the line pictures and the color information are sent to a server, which restores them into a characterized monitoring picture. This preserves monitoring accuracy while reducing the amount of data transmitted and avoiding transmission congestion.
Drawings
Fig. 1 is a network implementation environment diagram of an internet of things data transmission system according to an embodiment of the present invention;
fig. 2 is a flowchart of a data transmission method of the internet of things according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps for extracting a surveillance video picture from thermal imaging surveillance video data frame by frame according to an embodiment of the present invention;
fig. 4 is a flowchart of a step of performing line processing on each frame of surveillance video picture to obtain a feature picture and a color distribution data packet according to an embodiment of the present invention;
FIG. 5 is a flowchart of the steps for generating a color data distribution packet according to an embodiment of the present invention;
FIG. 6 is a flowchart of the steps provided in an embodiment of the present invention for generating the characterized surveillance video data from the characterized picture and color distribution data packet;
fig. 7 is an architecture diagram of an internet of things data transmission system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present application.
Fig. 1 is a network implementation environment diagram of an internet of things data transmission system according to an embodiment of the present invention, and as shown in fig. 1, the network implementation environment includes a camera, an internet of things data transmission system, and a server.
The server may be an independent physical server or terminal, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud databases, cloud storage and a CDN.
The camera is also called a computer camera, a computer eye, an electronic eye and the like, is a video input device, and is widely applied to aspects such as video conferences, telemedicine, real-time monitoring and the like. In the present invention, a camera with a thermal imaging function is used.
In the network implementation environment diagram, the number of the cameras can be one or more, the cameras are all connected with the internet of things data transmission system, the internet of things data transmission system is connected with the server, and the data collected by the cameras are sent to the server after being processed by the internet of things data transmission system.
In the existing Internet of Things, although the video information acquired by a high-definition monitoring device has high definition, the amount of video data collected is large, and the number of monitoring devices is also large, so the amount of information to be transmitted per unit time is huge. A high-bandwidth transmission channel therefore has to be provided, which undoubtedly increases cost, while a low-bandwidth transmission channel suffers from congestion.
To solve the above problems, in the embodiments of the present invention the collected video is first extracted frame by frame; each frame is then subjected to characterization processing, with only the main lines of the picture stored; the color information of the picture is then simplified and stored; and finally the line pictures and the color information are sent to a server, which restores them into a characterized monitoring picture. This preserves monitoring accuracy while reducing the amount of data transmitted and avoiding transmission congestion.
Specifically, as shown in fig. 2, it is a flowchart of a data transmission method of the internet of things according to an embodiment of the present invention;
the data transmission method of the Internet of things specifically comprises the following steps:
and S100, acquiring thermal imaging monitoring video data.
In this step, for the aquaculture industry, in order to monitor cultured organisms in an all-around manner, a plurality of monitoring devices, such as cameras, are usually provided, and each camera acquires a corresponding monitoring video.
S200, extracting the monitoring video pictures in the thermal imaging monitoring video data frame by frame.
In this step, the surveillance video data is read first, the video recorded in it is split into individual frames, and each frame is numbered in sequence to facilitate subsequent processing and integration.
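The splitting-and-numbering step can be sketched as follows. This is an illustrative pure-Python fragment, not the patent's implementation: it assumes the video has already been decoded into a sequence of frame objects, and the function name, the default frame rate and the naming pattern are all assumptions.

```python
from datetime import datetime, timedelta

def name_frames(frames, start_time, fps=25):
    """Give each decoded frame a time-ordered name so that subsequent
    processing and reassembly preserve the original sequence.
    The trailing index keeps names unique even within one second."""
    interval = timedelta(seconds=1.0 / fps)
    named = {}
    for i, frame in enumerate(frames):
        stamp = start_time + i * interval
        named["{:%Y%m%d-%H%M%S}-{:06d}".format(stamp, i)] = frame
    return named
```

Sorting the resulting names lexicographically recovers the original frame order.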
S300, performing line processing on each frame of monitoring video picture to obtain a characteristic picture and a color distribution data packet.
In this step, the read monitoring video picture is loaded and image recognition is performed, and each frame of monitoring video picture is converted into lines. During this process, blurring is applied so that similar colors are classified and represented by the same color. After the line processing, only lines remain in the monitoring video picture, and a number of closed areas are formed between the lines. A color distribution data packet is then generated on the basis of the original monitoring video picture; it records the color of each closed area.
And S400, packaging all the characteristic pictures and the color distribution data packets, and sending the packaged characteristic pictures and the color distribution data packets to a server, so that the server can generate and store the characteristic monitoring video data according to the characteristic pictures and the color distribution data packets.
In this step, after the surveillance video pictures have been processed frame by frame, a series of continuous line pictures is obtained together with a color distribution data packet for each line picture. The corresponding line pictures and color distribution data packets are encapsulated in the order of the picture numbers and sent to the server as a whole, and the server generates the corresponding simplified surveillance video pictures from them.
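The encapsulation and server-side unpacking can be sketched as follows. This is an illustrative stand-in, not the patent's wire format: JSON plus zlib deflate are assumed here purely for concreteness, and the triple layout (frame name, line picture, color packet) is an assumption.

```python
import json
import zlib

def encapsulate(entries):
    """Bundle (frame_name, line_picture, color_packet) triples in frame
    order and deflate-compress the payload before it is sent to the
    server; zlib stands in for whatever transport compression is used."""
    ordered = sorted(entries, key=lambda e: e[0])  # frame-number order
    return zlib.compress(json.dumps(ordered).encode("utf-8"))

def unpack(blob):
    """Server side: recover the ordered list of triples."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

Because only lines and a small color table are sent per frame, the compressed payload is far smaller than the raw video frames it replaces.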
As shown in fig. 3, as a preferred embodiment of the present invention, the step of extracting a surveillance video picture in thermal imaging surveillance video data frame by frame specifically includes:
s201, analyzing thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures.
In the step, after the monitoring video data are read, the videos contained in the monitoring video data are analyzed frame by frame, and finally a plurality of continuous monitoring video pictures are obtained.
S202, naming each frame of monitoring video picture according to the time sequence.
In this step, after a plurality of consecutive surveillance video pictures are obtained, each picture is numbered so as to determine the sequence of subsequent processing and integration.
S203, calculating the similarity of two adjacent monitoring video pictures according to the named sequence, and representing the continuous monitoring video pictures with the similarity higher than a set similarity threshold value by using a video clip characteristic picture, wherein the video clip characteristic picture is any one of the continuous monitoring video pictures.
In this step, two adjacent monitoring video pictures are read each time, in numbered order, and the similarity between them is calculated. To calculate the similarity, the two pictures are first pixelated to obtain two pixelated images, which are then converted to grayscale; the two grayscale images are compared pixel by pixel to obtain the amount of pixel variation between the images, which determines the similarity between the two pictures. Once the similarity is determined, consecutive monitoring video pictures whose similarity is higher than the set similarity threshold are represented by a single video clip characteristic picture, so that a static segment of the original monitoring video is replaced by one picture, compressing the overall data size.
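The grayscale comparison and static-run merging described above can be sketched as follows. This is an illustrative fragment only: the BT.601 luma weights, the per-pixel tolerance of 10 and the 0.95 similarity threshold are assumptions, not values from the patent, and frames are modeled as 2-D lists of (r, g, b) tuples.

```python
def frame_similarity(img_a, img_b):
    """Fraction of pixels whose grayscale values match within a tolerance;
    both frames must have identical dimensions."""
    def gray(px):  # ITU-R BT.601 luma approximation (assumed weighting)
        r, g, b = px
        return 0.299 * r + 0.587 * g + 0.114 * b
    total = changed = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            if abs(gray(pa) - gray(pb)) > 10:  # per-pixel tolerance (assumed)
                changed += 1
    return 1.0 - changed / total

def merge_static_runs(frames, threshold=0.95):
    """Collapse runs of near-identical consecutive frames into one
    representative video-clip characteristic picture (here: the first
    frame of each run), as in step S203."""
    kept = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        if frame_similarity(prev, cur) <= threshold:  # scene changed: keep
            kept.append(cur)
    return kept
```

The patent allows any frame of a static run to serve as the representative; taking the first is just the simplest choice.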
As shown in fig. 4, as a preferred embodiment of the present invention, the step of performing line processing on each frame of surveillance video picture to obtain a characterized picture and a color distribution data packet specifically includes:
s301, reading color data of each pixel point in the monitoring video picture.
In this step, after the monitored video picture is read, the distribution of each pixel in the monitored video picture is identified, thereby facilitating the subsequent processing.
S302, calculating a color difference value between color data of adjacent pixel points, and dividing the pixel points with the color difference value not higher than a set color difference threshold value into the same color area.
In this step, after identification, the color data of each pixel point is in numerical form. Different cameras support different numbers of colors, generally 8 bpp, 16 bpp, 24 bpp or 48 bpp, so each color has a corresponding numerical value. The color difference value between adjacent pixel points is therefore calculated and compared with the preset color difference threshold. When the color difference value is not higher than the threshold, the colors of the compared pixel points are similar; otherwise the color difference between the adjacent pixel points is large, and they are divided into two different color regions.
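The division of pixels into color regions can be sketched as a flood fill over 4-adjacent pixels. This is an illustrative sketch: the difference metric (sum of absolute RGB channel differences), the 4-adjacency and the threshold of 30 are assumptions, since the patent does not fix a specific metric.

```python
from collections import deque

def segment_regions(pixels, diff_threshold=30):
    """Number connected color regions of a frame: 4-adjacent pixels whose
    color difference does not exceed `diff_threshold` share a region.
    `pixels` is a 2-D list of (r, g, b) tuples; returns the per-pixel
    label grid and the number of regions found."""
    h, w = len(pixels), len(pixels[0])
    labels = [[None] * w for _ in range(h)]
    next_label = 0
    def diff(a, b):  # assumed metric: L1 distance over RGB channels
        return sum(abs(x - y) for x, y in zip(a, b))
    for y in range(h):
        for x in range(w):
            if labels[y][x] is not None:
                continue
            labels[y][x] = next_label
            queue = deque([(y, x)])
            while queue:  # breadth-first flood fill of one region
                cy, cx = queue.popleft()
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] is None
                            and diff(pixels[cy][cx], pixels[ny][nx]) <= diff_threshold):
                        labels[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label
```

The returned label grid directly gives the numbered color regions that the characterized picture and the color distribution data packet are built from.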
And S303, newly building a blank picture, drawing the boundary line between different color areas in the monitoring video picture in the blank picture, and numbering each color area to obtain the characteristic picture.
In this step, a blank picture is newly created. After the pixels have been identified and divided, adjacent pixel points whose color difference value is not higher than the color difference threshold are classified into the same region, forming a number of regions; the color differences between these regions are large, so a number of boundary lines are formed between them. These boundary lines are drawn at the same positions in the blank picture, which must therefore contain the same number of pixels as the monitoring video picture, and each color region is then numbered, yielding the characterized picture.
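Drawing the boundary lines into the blank picture amounts to marking every pixel that borders a different region. The sketch below is illustrative only; it assumes the division step has already produced a 2-D grid of region numbers, one per pixel.

```python
def boundary_mask(labels):
    """Build a blank picture of the same size, with 1 wherever a pixel
    borders a pixel of a different region (the dividing lines of the
    characterized picture) and 0 elsewhere."""
    h, w = len(labels), len(labels[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ny, nx in ((y, x + 1), (y + 1, x)):  # right and down neighbors
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != labels[y][x]:
                    mask[y][x] = 1
                    mask[ny][nx] = 1
    return mask
```

A binary line mask like this, plus the small region-to-color table, is what replaces the full-color frame in transmission.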
S304, generating a color data distribution data packet.
In this step, a color distribution data packet is generated based on the original surveillance video picture.
As shown in fig. 5, as a preferred embodiment of the present invention, the step of generating a color data distribution data packet specifically includes:
s3041, reading the color data of each pixel point in each color region.
In this step, the color data of the pixel points inside each numbered color region are read out, so that the per-region averages can be computed subsequently.
S3042, calculating an average value of the color data in each color region to obtain average color data.
In this step, the characterized picture obtained above contains a number of color regions. To improve display accuracy, the average value of the pixel color data in each color region is calculated, giving the average color data of that region; this color data is used to represent the original color of the region.
S3043, a mapping relationship table between each color region number and the corresponding average color data is established, and a color data distribution data packet is generated according to the mapping relationship table.
In this step, a mapping relationship table is established, so as to record the average color data corresponding to each color region, and finally the mapping relationship table is packaged into a color data distribution data packet.
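Steps S3041 to S3043 can be sketched as follows. This is an illustrative fragment: JSON is assumed as the packet serialization purely for concreteness, and the function name and argument layout are not from the patent.

```python
import json

def build_color_packet(pixels, labels, region_count):
    """Average the RGB values of each numbered region and pack the
    region-number-to-average-color mapping table as a compact payload.
    `pixels` is a 2-D list of (r, g, b) tuples; `labels` is the matching
    grid of region numbers."""
    sums = [[0, 0, 0] for _ in range(region_count)]
    counts = [0] * region_count
    for prow, lrow in zip(pixels, labels):
        for (r, g, b), lab in zip(prow, lrow):
            sums[lab][0] += r
            sums[lab][1] += g
            sums[lab][2] += b
            counts[lab] += 1
    table = {str(lab): [round(c / counts[lab]) for c in sums[lab]]
             for lab in range(region_count)}
    return json.dumps(table)
```

The resulting packet holds one color per region instead of one per pixel, which is where most of the data reduction comes from.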
As shown in fig. 6, as a preferred embodiment of the present invention, the step of generating the characterizing surveillance video data according to the characterizing picture and the color distribution data packet specifically includes:
s401, reading a data packet according to the characteristic picture and the color distribution.
In this step, the characterized pictures and the color distribution data packets are read directly. As described above, the characterized pictures are consecutive and each carries a number, the numbering order corresponds to the time sequence, and the color information of each region in a characterized picture is recorded in its color distribution data packet.
And S402, coloring each region in the characteristic picture according to the distribution situation of the colors in each region contained in the color distribution data packet.
In this step, the characterized picture contains only lines, and the areas enclosed by the lines have no color. Each area in the characterized picture is therefore colored according to the information recorded in the color distribution data packet, giving a series of continuous colored characterized pictures.
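The server-side coloring can be sketched as a lookup from region number to average color. This is an illustrative fragment; it assumes the packet is the JSON region-to-color table described above and that the region labels per pixel are available alongside the line picture.

```python
import json

def colorize(labels, packet):
    """Rebuild a flat-colored frame on the server: fill every pixel of
    each region of the characterized picture with that region's average
    color taken from the color distribution data packet."""
    table = {int(k): tuple(v) for k, v in json.loads(packet).items()}
    return [[table[lab] for lab in row] for row in labels]
```

Each restored frame is therefore a piecewise-constant approximation of the original thermal image, one color per region.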
And S403, synthesizing the characterized monitoring video data according to all the characterized pictures.
In this step, the characterized pictures are sorted in time order and synthesized into the characterized monitoring video data.
Further, since each region currently contains only one color, the accuracy of the thermal imaging would be reduced; the junctions between adjacent regions are therefore subjected to gradient processing, which improves the display accuracy of the characterized picture.
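One simple way to soften the hard color steps between regions is a small neighborhood average applied to the restored frame. This is only a stand-in for the patent's gradient processing, which is not specified in detail; the kernel (self plus 4-neighbors, one pass) is an assumption.

```python
def smooth_boundaries(frame):
    """Soften the hard color step between adjacent regions by replacing
    each pixel with the mean of itself and its in-bounds 4-neighbors,
    applied once. `frame` is a 2-D list of (r, g, b) tuples."""
    h, w = len(frame), len(frame[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = [0, 0, 0], 0
            for ny, nx in ((y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w:
                    for c in range(3):
                        acc[c] += frame[ny][nx][c]
                    n += 1
            out[y][x] = tuple(round(a / n) for a in acc)
    return out
```

Interior pixels of a region are unaffected (their neighborhood is uniform), so only the dividing lines are blended, as the passage above requires.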
As shown in fig. 7, the data transmission system of the internet of things provided in the embodiment of the present invention includes:
the information acquisition module 100 is configured to acquire thermal imaging monitoring video data.
The picture extracting module 200 is configured to extract the surveillance video pictures in the thermal imaging surveillance video data frame by frame.
In the present system, the picture extraction module 200 first reads the surveillance video data, splits the recorded video into individual frames, and numbers each frame in sequence to facilitate subsequent processing and integration.
The image processing module 300 is configured to perform line processing on each frame of the surveillance video image to obtain a characterized image and a color distribution data packet, where the characterized image is used to represent the outline of the living creatures and the breeding scenes in the surveillance video image, and the color distribution data packet includes the distribution of colors in each region in the characterized image.
In the system, the image processing module 300 loads the read surveillance video picture, performs image recognition, and converts each frame of surveillance video picture into lines. During this process blurring is applied so that similar colors are classified and represented by the same color. After the line processing only lines remain in the picture, forming a number of closed areas between them, and a color distribution data packet recording the color of each closed area is generated on the basis of the original surveillance video picture.
And the data sending module 400 is configured to encapsulate all the featured pictures and the color distribution data packets, and send the encapsulated featured pictures and the color distribution data packets to the server, so that the server can generate and store the featured monitoring video data according to the featured pictures and the color distribution data packets.
In the system, after the surveillance video pictures have been processed frame by frame, a series of continuous line pictures and a color distribution data packet for each line picture are obtained. The data sending module 400 encapsulates the corresponding line pictures and color distribution data packets in the order of the picture numbers and sends them to the server as a whole, and the server generates the corresponding simplified surveillance video pictures from them.
The picture extraction module 200 includes:
The parsing unit 201 is configured to parse the thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures.
The naming unit 202 is configured to name each frame of monitoring video picture according to the time sequence.
The content merging unit 203 is configured to calculate the similarity between two adjacent monitoring video pictures in the named order, and to represent consecutive monitoring video pictures whose similarity is higher than a set similarity threshold by a single video segment characteristic picture, where the video segment characteristic picture is any one of those consecutive monitoring video pictures.
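The content merging unit can be sketched as follows. The similarity metric (fraction of identical pixels) and the choice of the first frame of each run as the representative are assumptions: the patent only requires a similarity threshold and allows the representative to be any picture of the run.

```python
def similarity(a, b):
    """Fraction of equal pixels between two equal-length frames
    (the metric itself is an assumption, not specified by the patent)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def merge_similar(frames, threshold=0.9):
    """Keep one representative frame per run of consecutive frames whose
    similarity to the current representative exceeds the threshold."""
    kept = [frames[0]]
    for frame in frames[1:]:
        if similarity(kept[-1], frame) <= threshold:
            kept.append(frame)
    return kept

# Two identical frames collapse into one; distinct frames are all kept.
frames = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 2], [9, 9, 9, 9]]
reduced = merge_similar(frames, threshold=0.9)
```

Merging near-duplicate frames before line processing reduces both the processing load and the amount of data sent to the server.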
The picture processing module includes:
The data reading unit 301 is configured to read the color data of each pixel point in the monitoring video picture.
The color difference calculating unit 302 is configured to calculate the color difference value between the color data of adjacent pixel points, and to divide pixel points whose color difference value is not higher than a set color difference threshold into the same color area.
The characterization processing unit 303 is configured to create a new blank picture, draw the boundaries between the different color areas of the monitoring video picture in the blank picture, and number each color area to obtain the characterized picture.
The data packet generating unit 304 is configured to generate the color data distribution data packet.
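Taken together, units 302 through 304 can be sketched as a flood-fill segmentation followed by per-area averaging. The 4-connected neighborhood and the sum-of-absolute-differences color metric are assumptions, as the patent fixes neither.

```python
from collections import deque

def color_diff(a, b):
    """Sum of absolute per-channel differences (metric is an assumption)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def segment_regions(image, threshold=30):
    """Flood-fill the image into numbered color areas: neighboring pixels
    whose color difference does not exceed the threshold share an area.
    Returns the area-number map and the area-to-average-color table."""
    h, w = len(image), len(image[0])
    region = [[0] * w for _ in range(h)]
    averages = {}
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if region[sy][sx]:
                continue
            next_id += 1
            members = []
            queue = deque([(sy, sx)])
            region[sy][sx] = next_id
            while queue:
                y, x = queue.popleft()
                members.append(image[y][x])
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if 0 <= ny < h and 0 <= nx < w and not region[ny][nx] \
                       and color_diff(image[y][x], image[ny][nx]) <= threshold:
                        region[ny][nx] = next_id
                        queue.append((ny, nx))
            # The average color stands in for the area's original color.
            averages[next_id] = tuple(sum(c) / len(members)
                                      for c in zip(*members))
    return region, averages

# A one-row frame with a dark half and a bright half yields two areas.
image = [[(0, 0, 0), (0, 0, 0), (200, 200, 200), (200, 200, 200)]]
region, averages = segment_regions(image)
```

The returned area map doubles as the characterized picture (boundary lines run wherever adjacent entries hold different area numbers), and `averages` plays the role of the mapping relation table packaged into the color data distribution packet.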
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated otherwise, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and the sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent should be subject to the appended claims.
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. A data transmission method of the Internet of things is characterized by comprising the following steps:
acquiring thermal imaging monitoring video data;
extracting monitoring video pictures in the thermal imaging monitoring video data frame by frame;
performing line processing on each frame of monitoring video picture to obtain a characterized picture and a color distribution data packet, wherein the characterized picture is used for representing the appearance contours of cultured organisms and a culture scene in the monitoring video picture, and the color distribution data packet comprises the distribution condition of colors in each area in the characterized picture;
packaging all the characteristic pictures and the color distribution data packets, and sending the characteristic pictures and the color distribution data packets to a server, so that the server can generate and store characteristic monitoring video data according to the characteristic pictures and the color distribution data packets;
the step of performing line processing on each frame of monitoring video picture to obtain a characteristic picture and a color distribution data packet comprises the following steps:
dividing pixel points with color difference values not higher than a set color difference threshold into the same color area according to color data of each pixel point in the monitoring video picture;
reading color data of each pixel point in each color area, wherein after the monitoring video picture is read, the distribution of the pixel points in the monitoring video picture is identified, which facilitates subsequent processing;
calculating the average value of the color data in each color area to obtain average color data, wherein after the characteristic picture is generated, a plurality of color areas exist in the characteristic picture, and the average value of the pixel point color data in each color area is calculated to obtain the average color data of each color area, the average color data being used to represent the original color of the current color area;
and establishing a mapping relation table between the number of each color area and the corresponding average color data, and generating a color data distribution data packet according to the mapping relation table, wherein the mapping relation table records the average color data corresponding to each color area and is finally packaged into the color data distribution data packet.
2. The internet of things data transmission method according to claim 1, wherein the step of extracting the surveillance video picture in the thermal imaging surveillance video data frame by frame specifically comprises:
analyzing thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures;
naming each frame of monitoring video picture according to a time sequence;
and calculating the similarity of two adjacent monitoring video pictures according to the named sequence, and representing the continuous monitoring video pictures with the similarity higher than a set similarity threshold value by using a video clip characteristic picture, wherein the video clip characteristic picture is any one of the continuous monitoring video pictures.
3. The internet of things data transmission method according to claim 1, wherein the step of performing line processing on each frame of monitoring video picture to obtain a characterized picture and a color distribution data packet specifically comprises:
reading color data of each pixel point in a monitoring video picture;
calculating the color difference value between color data of adjacent pixel points, and dividing the pixel points with the color difference value not higher than a set color difference threshold value into the same color area;
newly building a blank picture, drawing the boundary between different color areas in the monitoring video picture in the blank picture, numbering each color area, and obtaining a characteristic picture;
a color data distribution data packet is generated.
4. The internet of things data transmission method according to claim 3, wherein the step of generating the color data distribution data packet specifically comprises:
reading color data of each pixel point in each color area;
calculating the average value of the color data in each color area to obtain average color data;
and establishing a mapping relation table between each color area number and corresponding average color data, and generating a color data distribution data packet according to the mapping relation table.
5. The internet of things data transmission method according to claim 1, wherein the step of generating the characterized surveillance video data according to the characterized picture and the color distribution data packet specifically comprises:
reading the characteristic picture and the color distribution data packet;
coloring each region in the characteristic picture according to the distribution condition of the colors in each region contained in the color distribution data packet;
and synthesizing the characterized monitoring video data according to all the characterized pictures.
6. The data transmission method of the internet of things according to claim 5, wherein after the step of coloring each region in the characteristic picture according to the color distribution in each region included in the color distribution data packet, the method further comprises performing gradient processing on the boundary lines.
7. The method for transmitting data of the internet of things according to claim 1, wherein after the step of acquiring the thermal imaging monitoring video data, the method further comprises compressing the thermal imaging monitoring video data.
8. An internet of things data transmission system, comprising:
the information acquisition module is used for acquiring thermal imaging monitoring video data;
the image extraction module is used for extracting the monitoring video images in the thermal imaging monitoring video data frame by frame;
the image processing module is used for carrying out line processing on each frame of monitoring video image to obtain a characteristic image and a color distribution data packet, wherein the characteristic image is used for representing the appearance contour of the cultured organisms and the culture scene in the monitoring video image, and the color distribution data packet comprises the distribution condition of colors in each area in the characteristic image;
the data sending module is used for packaging all the characteristic pictures and the color distribution data packets and sending the characteristic pictures and the color distribution data packets to the server, so that the server can generate and store characteristic monitoring video data according to the characteristic pictures and the color distribution data packets;
the image processing module carries out line processing on each frame of monitoring video image to obtain a characteristic image and a color distribution data packet, and the method comprises the following steps:
dividing pixel points with color difference values not higher than a set color difference threshold into the same color area according to color data of each pixel point in the monitoring video picture;
reading color data of each pixel point in each color area, wherein after the monitoring video picture is read, the distribution of the pixel points in the monitoring video picture is identified, which facilitates subsequent processing;
calculating the average value of the color data in each color area to obtain average color data, wherein after the characteristic picture is generated, a plurality of color areas exist in the characteristic picture, and the average value of the pixel point color data in each color area is calculated to obtain the average color data of each color area, the average color data being used to represent the original color of the current color area;
and establishing a mapping relation table between the number of each color area and the corresponding average color data, and generating a color data distribution data packet according to the mapping relation table, wherein the mapping relation table records the average color data corresponding to each color area and is finally packaged into the color data distribution data packet.
9. The internet of things data transmission system of claim 8, wherein the picture extraction module comprises:
the analysis unit is used for analyzing the thermal imaging monitoring video data to obtain a plurality of continuous monitoring video pictures;
the naming unit is used for naming each frame of monitoring video picture according to the time sequence;
and the content merging unit is used for calculating the similarity of two adjacent monitoring video pictures according to the named sequence, and representing the continuous monitoring video pictures with the similarity higher than a set similarity threshold value by one video clip characteristic picture, wherein the video clip characteristic picture is any one of the continuous monitoring video pictures.
10. The internet of things data transmission system of claim 8, wherein the picture processing module comprises:
the data reading unit is used for reading the color data of each pixel point in the monitoring video picture;
the color difference calculating unit is used for calculating the color difference value between color data of adjacent pixel points and dividing the pixel points of which the color difference value is not higher than a set color difference threshold value into the same color area;
the characteristic processing unit is used for newly building a blank picture, drawing a boundary line between different color areas in the monitoring video picture in the blank picture, and numbering each color area to obtain a characteristic picture;
and the data packet generating unit is used for generating the color data distribution data packet.
CN202110272099.0A 2021-03-12 2021-03-12 Internet of things data transmission method and system Active CN113114982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110272099.0A CN113114982B (en) 2021-03-12 2021-03-12 Internet of things data transmission method and system


Publications (2)

Publication Number Publication Date
CN113114982A CN113114982A (en) 2021-07-13
CN113114982B true CN113114982B (en) 2022-08-30

Family

ID=76711169

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110272099.0A Active CN113114982B (en) 2021-03-12 2021-03-12 Internet of things data transmission method and system

Country Status (1)

Country Link
CN (1) CN113114982B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2211601A2 (en) * 2009-01-27 2010-07-28 Omron Corporation Information display system and information display method for quality control of component-mounted substrate
CN101873414A (en) * 2010-05-17 2010-10-27 清华大学 Event video detection system based on hierarchical structure
CN103679756A (en) * 2013-12-26 2014-03-26 北京工商大学 Automatic target tracking method and system based on color and shape features
CN103986882A (en) * 2014-05-21 2014-08-13 福建歌航电子信息科技有限公司 Method for image classification, transmission and processing in real-time monitoring system
CN105528579A (en) * 2015-12-04 2016-04-27 中国农业大学 Milk cow breeding key process video extraction method and system based on image recognition
CN105550692A (en) * 2015-12-30 2016-05-04 南京邮电大学 Unmanned aerial vehicle automatic homing landing method based on landmark color and outline detection
CN110795595A (en) * 2019-09-10 2020-02-14 安徽南瑞继远电网技术有限公司 Video structured storage method, device, equipment and medium based on edge calculation
CN110996078A (en) * 2019-11-25 2020-04-10 深圳市创凯智能股份有限公司 Image acquisition method, terminal and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4071701B2 (en) * 2003-11-11 2008-04-02 富士通株式会社 Color image compression method and color image compression apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on video enhancement algorithms based on multiple region-of-interest fusion; Liang Longfei et al.; 《电视技术》 (Video Engineering); 2013-07-02 (No. 13); full text *
Multi-target tracking based on color and edge histograms; Zhang Lei et al.; 《液晶与显示》 (Chinese Journal of Liquid Crystals and Displays); 2016-06-15 (No. 06); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant