CN117651148A - Terminal management and control method for Internet of things - Google Patents

Terminal management and control method for Internet of things

Info

Publication number
CN117651148A
Authority
CN
China
Prior art keywords
video
dynamic
static
internet
things
Prior art date
Legal status
Pending
Application number
CN202311441709.0A
Other languages
Chinese (zh)
Inventor
吴跃平
贡献智
邓勇
Current Assignee
China Information Technology Designing and Consulting Institute Co Ltd
Guangdong Unicom Communication Construction Co Ltd
Original Assignee
China Information Technology Designing and Consulting Institute Co Ltd
Guangdong Unicom Communication Construction Co Ltd
Priority date
Filing date
Publication date
Application filed by China Information Technology Designing and Consulting Institute Co Ltd, Guangdong Unicom Communication Construction Co Ltd filed Critical China Information Technology Designing and Consulting Institute Co Ltd
Priority to CN202311441709.0A
Publication of CN117651148A
Legal status: Pending

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to the technical field of Internet of things terminals, and in particular to a terminal management and control method for the Internet of things. Videos acquired by a plurality of Internet of things terminals are received through the Internet of things; dynamic and static characterization coefficients of each video segment are calculated to judge whether the segment meets a preset dynamic and static difference standard. When the standard is not met, the video frames in the segment are encoded frame by frame. When the standard is met, a plurality of determination regions are divided in the segment and identified as static or dynamic regions; video frames are extracted from the segment at intervals, the frames that are not extracted are globally encoded, and the extracted frames are locally encoded. The receiving end receives the encoded data, combines the static encoded data with the locally encoded data, and decodes the combination to obtain the video.

Description

Terminal management and control method for Internet of things
Technical Field
The invention relates to the technical field of terminals of the Internet of things, in particular to a management and control method of terminals of the Internet of things.
Background
With the continuous development of Internet of things technology, it has been applied in many fields. Internet of things terminals generally include image acquisition terminals, various sensor terminals and the like, which collect data and forward it through the Internet of things so that a receiving end can receive the data.
For example, chinese patent publication No.: CN101917483a discloses a method, a system and a device for implementing communication management and control of an internet of things terminal, which mainly comprise: setting a brand new communication management and control device of the terminal of the Internet of things, connecting with an authentication, authorization and accounting AAA server, and synchronizing the recorded communication state information of the terminal of the Internet of things to the communication management and control device of the terminal of the Internet of things by the AAA server; the communication management and control equipment of the Internet of things terminal locally updates the communication state information of the Internet of things terminal and synchronizes the communication state information of the Internet of things terminal to an industry application system. The panoramic monitoring of the communication state of the terminal of the Internet of things and the more comprehensive professional management of the terminal of the Internet of things can be realized, the operation and maintenance efficiency of the terminal of the Internet of things is improved, and the operation cost of the business of the Internet of things is effectively saved.
However, the prior art has the following problem: when an image acquisition terminal is applied in the monitoring field, the amount of motion in the monitored images varies, and in certain periods the monitored images are highly homogeneous, so that forwarding them to the receiving end over the Internet of things consumes a large amount of traffic and occupies bandwidth.
Disclosure of Invention
Therefore, the invention provides a terminal management and control method for the Internet of things, which addresses the problem that monitored images from an image acquisition terminal, which vary in motion and in certain periods are highly homogeneous, consume a large amount of traffic and occupy bandwidth when forwarded to the receiving end over the Internet of things.
In order to achieve the above object, the present invention provides a method for controlling an internet of things terminal, comprising:
step S1, receiving videos acquired by a plurality of terminals of the Internet of things through the Internet of things;
s2, analyzing the video segment after the video received by the Internet of things is split, identifying the characteristic outline in the video frame, calculating the dynamic offset parameter according to the coordinate offset of the characteristic outline, calculating the dynamic and static characterization coefficient by combining the difference of the image parameters of each video frame in the video segment, and judging whether the video segment meets the preset dynamic and static difference standard;
step S3, according to whether the video segment meets the preset standard, the video segment is sent to the receiving end in a corresponding transmission mode, which comprises,
after video frames in the video section are coded frame by frame, the coded data are sent to a receiving end;
or dividing a plurality of judging areas in the video section, identifying a static area and a dynamic area according to the difference of chromaticity parameters in the judging areas of each video frame,
extracting video frames in a video segment at predetermined intervals, performing global encoding on the video frames which are not extracted, performing local encoding on images in a dynamic region in the extracted video frames, and transmitting encoded data to a receiving end;
and S4, after receiving the coded data, the receiving end screens the coded data of the global coding, screens out the static coded data corresponding to the static region in the video frame, combines the static coded data with the coded data of the local coding, and decodes the combined static coded data to obtain the video.
Further, identifying a feature profile in the video frame, calculating a dynamic offset parameter based on a coordinate offset of the feature profile includes,
the coordinate distance of the center point of the same characteristic outline in the first video frame and the last video frame is obtained, the dynamic offset parameter is calculated according to the formula (1),
in formula (1), E represents the dynamic offset parameter, n represents the number of feature contours common to the first and last video frames, and Si represents the coordinate distance between the center points of the i-th common feature contour.
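Formula (1) itself is not reproduced in this text. A plausible reconstruction from the definitions above, on the assumption that E is the mean displacement of the matched contour centers (an assumption, not the verbatim formula), is:

```latex
% Hypothetical reconstruction of formula (1): E as the mean coordinate
% distance of the n matched feature-contour center points.
E = \frac{1}{n}\sum_{i=1}^{n} S_i \qquad (1)
```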
Further, in the step S2, dynamic and static characterization coefficients are calculated according to the formula (2),
in formula (2), D represents the dynamic and static characterization coefficient, E0 represents a preset dynamic offset parameter threshold, γ represents the dynamic offset parameter weight coefficient, L represents the luminance value, L0 represents a preset luminance value threshold, β represents the luminance value weight coefficient, A represents the chrominance value, A0 represents a preset chrominance value threshold, and α represents the chrominance value weight coefficient.
Further, the step S2 of determining whether the video segment meets the predetermined dynamic and static difference criteria includes,
if the dynamic and static characterization coefficient is smaller than a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment accords with a preset dynamic and static difference standard;
and if the dynamic and static characterization coefficient is larger than or equal to a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment does not accord with a preset dynamic and static difference standard.
Further, in the step S3, the video segment is sent to the receiving end in a corresponding transmission mode according to whether the video segment meets a predetermined standard,
if the video segment does not accord with the preset dynamic and static difference standard, the video frames in the video segment are coded frame by frame and then the coded data are sent to a receiving end;
if the video segment accords with the preset dynamic and static difference standard, dividing a plurality of judging areas in the video segment, calculating a chromaticity parameter difference value according to chromaticity parameters of each video frame in the judging areas to identify a static area and a dynamic area,
extracting video frames in the video segment at predetermined intervals, globally encoding the video frames which are not extracted, locally encoding the images in the dynamic region in the extracted video frames, and transmitting the encoded data to a receiving end.
Further, in the step S3, a chrominance parameter difference value of each video frame in the determination area is calculated according to the formula (3),
in formula (3), M represents the difference value, m represents the number of video frames, Aj represents the chrominance value of the j-th video frame in the determination region, Ae represents the average chrominance value of the video frames in the determination region, Lj represents the luminance value of the j-th video frame in the determination region, and Le represents the average luminance value of the video frames in the determination region.
Further, in the step S3, the process of identifying the static area and the dynamic area according to the chrominance parameter difference value of each video frame in the determination area includes,
comparing the difference value of the chromaticity parameter of each video frame in the judging area with a preset difference threshold value,
if the difference value of the chromaticity parameters is larger than a preset difference threshold value, the judging area is identified as a dynamic area,
and if the difference value of the chromaticity parameters is smaller than or equal to a preset difference threshold value, identifying the judging area as a static area.
Further, in the step S4, the process of combining the static encoded data with the encoded data of the partial encoding includes,
the static encoded data is duplicated and combined with the locally encoded data, respectively.
Further, in the step S3, the method further includes adjusting the encoding compression rate according to the chroma parameter difference value corresponding to each region during encoding compression,
and adjusting the coding compression rate to be inversely related to the difference value corresponding to each region.
Further, in the step S3, the areas of the determination regions are the same.
Compared with the prior art, the invention receives videos acquired by a plurality of Internet of things terminals through the Internet of things; calculates dynamic and static characterization coefficients of each video segment and judges whether the segment meets a preset dynamic and static difference standard; when the standard is not met, encodes the video frames in the segment frame by frame; when the standard is met, divides a plurality of determination regions in the segment, identifies static and dynamic regions, extracts video frames from the segment at intervals, globally encodes the frames that are not extracted and locally encodes the extracted frames; the receiving end receives the encoded data, combines the static encoded data with the locally encoded data, and decodes the combination to obtain the video.
In particular, the invention characterizes changes within video frames by calculating dynamic and static characterization coefficients. The coefficient accounts for the positional change of feature contours in the video frames and uses the luminance and chrominance values to characterize color changes; calculating the coefficient from multiple reference factors improves the reliability of estimating the changes within the video frames.
In particular, the invention distinguishes video segments with large variation from video segments with small variation by judging whether each segment meets the preset dynamic and static difference standard, so that different transmission modes can subsequently be adopted, reducing the size of the video segments received by the receiving end and lightening its computational load.
In particular, the invention identifies static and dynamic regions through the chrominance parameter difference value within the determination regions of each video frame. When the difference value in a determination region is large, the image in that region can be considered to have changed substantially; when it is small, the image can be considered to have changed little. Regions with large changes are judged dynamic and regions with small changes are judged static, which allows different encoding modes to be applied to static and dynamic regions subsequently, reducing the size of the video segments received by the receiving end and lightening its computational load.
Particularly, the encoding compression rate is adjusted according to the chrominance parameter difference value of each region during encoding compression; a region with a smaller difference value carries less key information, so its compression rate can be increased, reducing the size of the video segment received by the receiving end and lightening its computational load.
Drawings
Fig. 1 is a step diagram of a terminal management and control method of the internet of things according to an embodiment of the invention;
FIG. 2 is a dynamic and static difference standard logic decision diagram of an embodiment of the invention;
FIG. 3 is a diagram showing a video segment transmission mode selection according to an embodiment of the invention;
FIG. 4 is a diagram illustrating dynamic region and static region logic decisions according to an embodiment of the present invention.
Detailed Description
In order that the objects and advantages of the invention will become more apparent, the invention will be further described with reference to the following examples; it should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
Furthermore, it should be noted that, in the description of the present invention, unless explicitly specified and limited otherwise, the term "connected" should be construed broadly: it may be a fixed, detachable or integral connection; a mechanical or electrical connection; a direct connection, an indirect connection through an intermediate medium, or a communication connection between two elements. The specific meaning of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Referring to fig. 1-4, fig. 1 is a step diagram of a method for controlling an internet of things terminal according to an embodiment of the invention; FIG. 2 is a dynamic and static difference standard logic decision diagram of an embodiment of the invention; FIG. 3 is a diagram showing a video segment transmission mode selection according to an embodiment of the invention; fig. 4 is a logic decision diagram of a dynamic region and a static region according to an embodiment of the present invention, and a method for controlling an internet of things terminal according to the present invention includes:
step S1, receiving videos acquired by a plurality of terminals of the Internet of things through the Internet of things;
s2, analyzing the video segment after the video received by the Internet of things is split, identifying the characteristic outline in the video frame, calculating the dynamic offset parameter according to the coordinate offset of the characteristic outline, calculating the dynamic and static characterization coefficient by combining the difference of the image parameters of each video frame in the video segment, and judging whether the video segment meets the preset dynamic and static difference standard;
step S3, according to whether the video segment meets the preset standard, the video segment is sent to the receiving end in a corresponding transmission mode, which comprises,
after video frames in the video section are coded frame by frame, the coded data are sent to a receiving end;
or dividing a plurality of judging areas in the video section, identifying a static area and a dynamic area according to the difference of chromaticity parameters in the judging areas of each video frame,
extracting video frames in a video segment at predetermined intervals, performing global encoding on the video frames which are not extracted, performing local encoding on images in a dynamic region in the extracted video frames, and transmitting encoded data to a receiving end;
and S4, after receiving the coded data, the receiving end screens the coded data of the global coding, screens out the static coded data corresponding to the static region in the video frame, combines the static coded data with the coded data of the local coding, and decodes the combined static coded data to obtain the video.
Specifically, the specific modes of video encoding and decoding are not limited: the encoding may use modes such as H.264/AVC or H.265/HEVC, and the decoding may use a suitable video decoder; these are prior art and are not described further.
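For instance, a captured segment could be re-encoded with H.264 by invoking ffmpeg. This is an illustrative sketch outside the patent text; it assumes ffmpeg is installed, and the file names are hypothetical:

```python
# Illustrative sketch only: re-encode a captured segment with H.264 via ffmpeg.
# "segment.mp4" and "segment_h264.mp4" are hypothetical file names.
import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "segment.mp4", "-c:v", "libx264", "segment_h264.mp4"],
    check=True,
)
```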
Specifically, the specific mode of identifying feature contours in a video frame is not limited: it can be realized through an image processing model trained in advance to perform this function and imported into a data processing component; this is prior art and is not described further.
For analysis and logic determination of video segments, a virtual processor can be set in the internet of things for processing, which is the prior art and will not be described in detail.
Specifically, the specific structure of the terminal of the internet of things is not limited, and the terminal of the internet of things can be an image collector with a data transmission function, which is not described again.
In particular, identifying feature contours in a video frame, calculating dynamic offset parameters based on coordinate offsets of the feature contours includes,
the coordinate distance of the center point of the same characteristic outline in the first video frame and the last video frame is obtained, the dynamic offset parameter is calculated according to the formula (1),
in formula (1), E represents the dynamic offset parameter, n represents the number of feature contours common to the first and last video frames, and Si represents the coordinate distance between the center points of the i-th common feature contour.
In this embodiment, the coordinate distance between the center points of the same feature contour in the first and last video frames is obtained by establishing a rectangular coordinate system with the center point of the video frame as the origin and applying the two-point distance formula to the two coordinate points.
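As an illustrative sketch (not part of the patent text), the center-point extraction and the hedged reading of formula (1) given earlier can be implemented with OpenCV. Matching contours by index is a simplification of the "same feature contour" identification, and all function names here are hypothetical:

```python
# Sketch: compute the dynamic offset parameter E from the first and last frame.
# Assumes contours in the two frames correspond by index, a simplification of
# the patent's matching of identical feature contours.
import cv2
import numpy as np

def contour_centers(gray):
    # Binarize, then take the centroid of each detected contour.
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

def dynamic_offset(first_gray, last_gray):
    a, b = contour_centers(first_gray), contour_centers(last_gray)
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    # Hedged reading of formula (1): mean center-point distance Si over n contours.
    return sum(np.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a[:n], b[:n])) / n
```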
Specifically, in the step S2, the dynamic and static characterization coefficients are calculated according to the formula (2),
in formula (2), D represents the dynamic and static characterization coefficient, E0 represents a preset dynamic offset parameter threshold, γ represents the dynamic offset parameter weight coefficient, L represents the luminance value, L0 represents a preset luminance value threshold, β represents the luminance value weight coefficient, A represents the chrominance value, A0 represents a preset chrominance value threshold, and α represents the chrominance value weight coefficient; in this embodiment, γ=0.5, β=0.3 and α=0.2 are set.
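Formula (2) is likewise absent from the extracted text. Given the three threshold-normalized terms and their weights defined above, one natural reconstruction (an assumption) is a weighted sum:

```latex
% Hypothetical reconstruction of formula (2): weighted sum of the dynamic
% offset, luminance and chrominance terms, each normalized by its threshold.
D = \gamma\,\frac{E}{E_0} + \beta\,\frac{L}{L_0} + \alpha\,\frac{A}{A_0} \qquad (2)
```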
In this embodiment, E0, L0 and A0 are determined in advance by experiment: several video frames of the acquisition area of the Internet of things terminal are collected, the average dynamic characterization parameter Ep, average luminance value Lp and average chrominance value Ap are calculated, and then E0 = Ep×g, L0 = Lp×v and A0 = Ap×b, where g, v and b each lie between 1.2 and 1.4.
In particular, the invention characterizes changes within video frames by calculating dynamic and static characterization coefficients. The coefficient accounts for the positional change of feature contours in the video frames and uses the luminance and chrominance values to characterize color changes; calculating the coefficient from multiple reference factors improves the reliability of estimating the changes within the video frames.
Specifically, the step S2 of determining whether the video segment meets the predetermined dynamic and static difference standard includes,
if the dynamic and static characterization coefficient is smaller than a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment accords with a preset dynamic and static difference standard;
and if the dynamic and static characterization coefficient is larger than or equal to a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment does not accord with a preset dynamic and static difference standard.
Specifically, the invention distinguishes video segments with large variation from video segments with small variation by judging whether each segment meets the preset dynamic and static difference standard, so that different transmission modes can subsequently be adopted, reducing the size of the video segments received by the receiving end and lightening its computational load.
In this embodiment, the preset dynamic and static characterization coefficient comparison threshold is the coefficient value calculated with E = 1.2Ep, L = 1.3Lp and A = 1.3Ap.
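A minimal sketch of the step-S2 judgment under the assumed form of formula (2) above; the weight values are those of this embodiment, while the factors g, v and b (given only as ranges) and the function names are assumptions:

```python
# Sketch of the step-S2 judgment, under the ASSUMED form of formula (2).
# Ep, Lp, Ap are the experimentally measured averages described above;
# g, v, b default to 1.3 here, inside the embodiment's stated ranges.
GAMMA, BETA, ALPHA = 0.5, 0.3, 0.2  # weight coefficients of this embodiment

def dynamic_static_coefficient(e, l, a, e0, l0, a0):
    # Assumed formula (2): weighted, threshold-normalized sum.
    return GAMMA * e / e0 + BETA * l / l0 + ALPHA * a / a0

def meets_difference_standard(e, l, a, ep, lp, ap, g=1.3, v=1.3, b=1.3):
    e0, l0, a0 = ep * g, lp * v, ap * b
    d = dynamic_static_coefficient(e, l, a, e0, l0, a0)
    # Comparison threshold: D evaluated at E = 1.2Ep, L = 1.3Lp, A = 1.3Ap.
    d_threshold = dynamic_static_coefficient(1.2 * ep, 1.3 * lp, 1.3 * ap, e0, l0, a0)
    return d < d_threshold  # smaller coefficient -> meets the difference standard
```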
Specifically, in the step S3, the video segment is sent to the receiving end in a corresponding transmission mode according to whether the video segment meets a predetermined standard,
if the video segment does not accord with the preset dynamic and static difference standard, the video frames in the video segment are coded frame by frame and then the coded data are sent to a receiving end;
if the video segment accords with the preset dynamic and static difference standard, dividing a plurality of judging areas in the video segment, calculating a chromaticity parameter difference value according to chromaticity parameters of each video frame in the judging areas to identify a static area and a dynamic area,
extracting video frames in the video segment at predetermined intervals, globally encoding the video frames which are not extracted, locally encoding the images in the dynamic region in the extracted video frames, and transmitting the encoded data to a receiving end.
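The extraction step can be pictured as follows: every k-th frame is extracted for local (dynamic-region-only) encoding, and the remaining frames are globally encoded. The interval k is a hypothetical parameter, since the patent only states "predetermined intervals":

```python
# Sketch: pick every k-th frame for local (dynamic-region-only) encoding;
# the rest receive global encoding. The interval k is a hypothetical choice.
def split_frames(frames, k=5):
    extracted, kept = [], []
    for idx, frame in enumerate(frames):
        (extracted if idx % k == 0 else kept).append((idx, frame))
    return kept, extracted  # kept -> global encoding, extracted -> local encoding
```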
Specifically, in the step S3, the difference value of the chromaticity parameter in the judging area of each video frame is calculated according to the formula (3),
in formula (3), M represents the difference value, m represents the number of video frames, Aj represents the chrominance value of the j-th video frame in the determination region, Ae represents the average chrominance value of the video frames in the determination region, Lj represents the luminance value of the j-th video frame in the determination region, and Le represents the average luminance value of the video frames in the determination region.
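Formula (3) is also missing from the extracted text. One reading consistent with the symbols above and with the dimensionless thresholds (0.05, 0.15) used later is a normalized mean absolute deviation; this is an assumption, not the verbatim formula:

```latex
% Hypothetical reconstruction of formula (3): mean relative deviation of the
% per-frame chrominance and luminance values within a determination region.
M = \frac{1}{m}\sum_{j=1}^{m}\left(\frac{\lvert A_j - A_e\rvert}{A_e}
    + \frac{\lvert L_j - L_e\rvert}{L_e}\right) \qquad (3)
```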
Specifically, in the step S3, the process of identifying the static area and the dynamic area according to the difference value of the chrominance parameters in the judging area of each video frame comprises the steps of,
comparing the difference value of the chromaticity parameter of each video frame in the judging area with a preset difference threshold value,
if the difference value of the chromaticity parameters is larger than a preset difference threshold value, the judging area is identified as a dynamic area,
and if the difference value of the chromaticity parameters is smaller than or equal to a preset difference threshold value, identifying the judging area as a static area.
In the present embodiment, the preset difference threshold is set within the interval [0.1, 0.15].
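A sketch of the region classification, using the assumed form of formula (3) above and a threshold drawn from the interval just given; the array layout and all names here are assumptions:

```python
# Sketch: classify equal-area determination regions as dynamic or static.
import numpy as np

def region_difference(a_vals, l_vals):
    # Assumed formula (3): mean relative deviation of chrominance and luminance.
    a_e, l_e = a_vals.mean(), l_vals.mean()
    return float(np.mean(np.abs(a_vals - a_e) / a_e + np.abs(l_vals - l_e) / l_e))

def classify_regions(chroma, luma, threshold=0.12):
    # chroma, luma: arrays of shape (num_frames, num_regions) holding the
    # per-region mean chrominance/luminance of each video frame.
    labels = []
    for r in range(chroma.shape[1]):
        m = region_difference(chroma[:, r], luma[:, r])
        labels.append("dynamic" if m > threshold else "static")
    return labels
```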
Specifically, the invention identifies static and dynamic regions through the chrominance parameter difference value within the determination regions of each video frame. When the difference value in a determination region is large, the image in that region can be considered to have changed substantially; when it is small, the image can be considered to have changed little. Regions with large changes are judged dynamic and regions with small changes are judged static, which allows different encoding modes to be applied to static and dynamic regions subsequently, reducing the size of the video segments received by the receiving end and lightening its computational load.
In particular, in the step S4, the process of combining the static encoded data with the locally encoded data includes,
copying the static coded data and respectively combining with the coded data of the local coding;
It will be appreciated that the locally encoded data lacks the encoded data of the static region; therefore, combining the static encoded data with the locally encoded data allows a complete image to be obtained on decoding.
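At the pixel level, the combination can be pictured as pasting each locally decoded dynamic-region patch onto a copy of the decoded static background; the region bookkeeping below is a hypothetical simplification:

```python
# Sketch: rebuild a full frame from the static background plus dynamic patches.
def reassemble(static_background, dynamic_patches, regions):
    # static_background: decoded frame (numpy array) from the global encoding.
    # dynamic_patches: locally decoded images of the dynamic regions.
    # regions: (y0, y1, x0, x1) boxes for the dynamic areas (hypothetical).
    frame = static_background.copy()  # the static data is duplicated, as in step S4
    for patch, (y0, y1, x0, x1) in zip(dynamic_patches, regions):
        frame[y0:y1, x0:x1] = patch  # overlay each dynamic region
    return frame
```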
Specifically, the step S3 further includes adjusting the encoding compression rate according to the chroma parameter difference value corresponding to each region during encoding compression,
and adjusting the coding compression rate to be inversely related to the difference value corresponding to each region.
In this embodiment, optionally:
if M < 0.05, the coding compression rate is adjusted to 50%;
if 0.05 ≤ M < 0.15, the coding compression rate is adjusted to 30%;
if M ≥ 0.15, the coding compression rate is adjusted to 10%.
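The stepwise mapping of this embodiment, written as a small function for clarity:

```python
# Stepwise mapping from the difference value M to the coding compression rate,
# as given in this embodiment (smaller M -> higher compression rate).
def compression_rate(m: float) -> float:
    if m < 0.05:
        return 0.50
    if m < 0.15:
        return 0.30
    return 0.10
```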
Specifically, the encoding compression rate is adjusted according to the chrominance parameter difference value of each region during encoding compression; the smaller the difference value, the less key information the region carries, so its compression rate can be increased, reducing the size of the video segment received by the receiving end and lightening its computational load.
Specifically, in the step S3, the areas of the determination regions are the same.
The terminal management and control method of the Internet of things, if implemented in the form of a software functional unit and sold or used as an independent product, can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will be within the scope of the present invention.

Claims (10)

1. A terminal management and control method for the Internet of things, characterized by comprising the following steps:
step S1, receiving videos acquired by a plurality of terminals of the Internet of things through the Internet of things;
s2, analyzing the video segment after the video received by the Internet of things is split, identifying the characteristic outline in the video frame, calculating the dynamic offset parameter according to the coordinate offset of the characteristic outline, calculating the dynamic and static characterization coefficient by combining the difference of the image parameters of each video frame in the video segment, and judging whether the video segment meets the preset dynamic and static difference standard;
step S3, according to whether the video segment meets the preset standard, the video segment is sent to the receiving end in a corresponding transmission mode, which comprises,
after video frames in the video section are coded frame by frame, the coded data are sent to a receiving end;
or dividing a plurality of judging areas in the video section, identifying a static area and a dynamic area according to the difference of chromaticity parameters in the judging areas of each video frame,
extracting video frames in a video segment at predetermined intervals, performing global encoding on the video frames which are not extracted, performing local encoding on images in a dynamic region in the extracted video frames, and transmitting encoded data to a receiving end;
and S4, after receiving the coded data, the receiving end screens the coded data of the global coding, screens out the static coded data corresponding to the static region in the video frame, combines the static coded data with the coded data of the local coding, and decodes the combined static coded data to obtain the video.
2. The method according to claim 1, wherein in the step S2, the feature profile in the video frame is identified, and the process of calculating the dynamic offset parameter according to the coordinate offset amount of the feature profile includes,
the coordinate distance of the center point of the same characteristic outline in the first video frame and the last video frame is obtained, the dynamic offset parameter is calculated according to the formula (1),
in formula (1), E represents the dynamic offset parameter, n represents the number of feature contours common to the first and last video frames, and Si represents the coordinate distance between the center points of the i-th common feature contour.
3. The method for controlling the terminal of the internet of things according to claim 2, wherein in the step S2, dynamic and static characterization coefficients are calculated according to the formula (2),
in formula (2), D represents the dynamic and static characterization coefficient, E0 represents a preset dynamic offset parameter threshold, γ represents the dynamic offset parameter weight coefficient, L represents the luminance value, L0 represents a preset luminance value threshold, β represents the luminance value weight coefficient, A represents the chrominance value, A0 represents a preset chrominance value threshold, and α represents the chrominance value weight coefficient.
4. The method for managing and controlling terminals of the internet of things according to claim 3, wherein the step S2 of determining whether the video segment meets the predetermined dynamic and static difference criteria includes,
if the dynamic and static characterization coefficient is smaller than a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment accords with a preset dynamic and static difference standard;
and if the dynamic and static characterization coefficient is larger than or equal to a preset dynamic and static characterization coefficient comparison threshold, judging that the video segment does not accord with a preset dynamic and static difference standard.
5. The method for managing and controlling terminals of Internet of things according to claim 4, wherein in step S3, the video segment is sent to the receiving end in a corresponding transmission mode according to whether the video segment meets a predetermined standard,
if the video segment does not accord with the preset dynamic and static difference standard, the video frames in the video segment are coded frame by frame and then the coded data are sent to a receiving end;
if the video segment accords with the preset dynamic and static difference standard, dividing a plurality of judging areas in the video segment, calculating a chromaticity parameter difference value according to chromaticity parameters of each video frame in the judging areas to identify a static area and a dynamic area,
extracting video frames in the video segment at predetermined intervals, globally encoding the video frames which are not extracted, locally encoding the images in the dynamic region in the extracted video frames, and transmitting the encoded data to a receiving end.
6. The method for controlling the terminal of the internet of things according to claim 1, wherein in the step S3, the difference value of the chromaticity parameter of each video frame in the determination area is calculated according to the formula (3),
in formula (3), M represents the difference value, m represents the number of video frames, Aj represents the chrominance value of the j-th video frame in the determination region, Ae represents the average chrominance value of the video frames in the determination region, Lj represents the luminance value of the j-th video frame in the determination region, and Le represents the average luminance value of the video frames in the determination region.
7. The method according to claim 6, wherein in the step S3, the process of identifying the static area and the dynamic area according to the difference value of the chrominance parameters of each video frame in the determination area includes,
comparing the difference value of the chromaticity parameter of each video frame in the judging area with a preset difference threshold value,
if the difference value of the chromaticity parameters is larger than a preset difference threshold value, the judging area is identified as a dynamic area,
and if the difference value of the chromaticity parameters is smaller than or equal to a preset difference threshold value, identifying the judging area as a static area.
8. The method according to claim 1, wherein the combining of the static encoded data and the locally encoded data in step S4 includes,
the static encoded data is duplicated and combined with the locally encoded data, respectively.
9. The method for managing and controlling the terminal of the internet of things according to claim 1, wherein the step S3 further comprises adjusting the encoding compression rate according to the chromaticity parameter difference value corresponding to each region during encoding compression,
and adjusting the coding compression rate to be inversely related to the difference value corresponding to each region.
10. The method for managing and controlling the terminal of the internet of things according to claim 1, wherein in the step S3, the areas of the determination areas are the same.
CN202311441709.0A, filed 2023-11-01, priority 2023-11-01: Terminal management and control method for Internet of things (CN117651148A, pending)

Priority Applications (1)

Application Number: CN202311441709.0A · Priority Date: 2023-11-01 · Filing Date: 2023-11-01 · Title: Terminal management and control method for Internet of things


Publications (1)

Publication Number Publication Date
CN117651148A 2024-03-05

Family

ID=90048548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311441709.0A Pending CN117651148A (en) 2023-11-01 2023-11-01 Terminal management and control method for Internet of things

Country Status (1)

Country Link
CN (1) CN117651148A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023001107A1 (en) * 2021-07-19 2023-01-26 Sony Group Corporation Photographic image processing method and device
CN116260928A (en) * 2023-05-15 2023-06-13 湖南马栏山视频先进技术研究院有限公司 Visual optimization method based on intelligent frame insertion
CN116567410A (en) * 2023-07-10 2023-08-08 芯知科技(江苏)有限公司 Auxiliary photographing method and system based on scene recognition
CN116847126A (en) * 2023-07-20 2023-10-03 北京富通亚讯网络信息技术有限公司 Video decoding data transmission method and system
CN116708789A (en) * 2023-08-04 2023-09-05 湖南马栏山视频先进技术研究院有限公司 Video analysis coding system based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination