CN115529460A - Method for realizing dynamic mosaic based on content coding


Info

Publication number
CN115529460A
Authority
CN
China
Prior art keywords
video
coding
target
content
image
Prior art date
Legal status
Pending
Application number
CN202111276922.1A
Other languages
Chinese (zh)
Inventor
Duan Tao (段涛)
Current Assignee
Shenzhen Xiaoyou Entertainment Technology Co ltd
Original Assignee
Shenzhen Xiaoyou Entertainment Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xiaoyou Entertainment Technology Co ltd
Priority to CN202111276922.1A
Publication of CN115529460A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H04N19/23: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783: Retrieval characterised by using metadata automatically derived from the content
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/188: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a video data packet, e.g. a network abstraction layer [NAL] unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application discloses a method for implementing a dynamic mosaic based on content coding, comprising the following steps: (1) screening out target coding units to be coded in a video content image according to target reference blocks made from existing video content images; (2) performing video content coding based on the target coding units and the target reference blocks; (3) creating a video feature training database from a training atlas according to the video content coding; (4) capturing target video content image data; (5) monitoring feature regions in the target video content image data and performing image preprocessing; (6) matching the preprocessed images against the images in the feature training database to identify the feature regions. The method thereby implements a dynamic mosaic based on content coding.

Description

Method for realizing dynamic mosaic based on content coding
Technical Field
The application relates to the technical field of dynamic mosaic processing, in particular to a method for realizing dynamic mosaic based on content coding.
Background
A mosaic is a widely used image and video processing technique that degrades the color-gradation detail in a specific region of an image and scrambles its color blocks; because the resulting blur is built from small square cells, the picture visually resembles mosaic tiling. Its usual purpose is to make the region unrecognizable. Some techniques for adding dynamic mosaics to videos already exist, but most of them target computer terminals.
Existing dynamic-mosaic processing generally requires manual identification and coding. For dynamic videos, this manual coding is slow, the mosaic cannot follow the content automatically, and a method for implementing a dynamic mosaic according to content coding is lacking. A method for implementing a dynamic mosaic based on content coding is therefore proposed to solve these problems.
Disclosure of Invention
This embodiment provides a method for implementing a dynamic mosaic based on content coding. It addresses the problems that existing dynamic-mosaic processing generally requires manual identification and coding, that manual coding is slow for dynamic videos, that the mosaic cannot follow content coding, and that a content-coding-based method is lacking.
According to an aspect of the present application, there is provided a method for implementing a dynamic mosaic based on content coding, the method comprising the following steps:
(1) screening out target coding units to be coded in the video content image according to target reference blocks made from existing video content images;
(2) performing video content coding based on the target coding units and the target reference blocks;
(3) creating a video feature training database from a training atlas according to the video content coding;
(4) capturing target video content image data;
(5) monitoring feature regions in the target video content image data and performing image preprocessing;
(6) matching the preprocessed images against the images in the feature training database to identify the feature regions;
(7) performing image conversion on the identified feature regions;
(8) generating a video file from the converted video image data;
(9) pre-reviewing the generated video file and storing it after it passes review.
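The step sequence above can be sketched as a simple control-flow skeleton. Everything below is an illustrative assumption: the patent leaves detection, matching and mosaic conversion abstract, so they are injected here as callables.

```python
# Control-flow sketch of steps (4)-(8); detect, match and mosaic are
# injected callables because the patent leaves their internals abstract.

def dynamic_mosaic_pipeline(frames, detect, match, mosaic):
    """For each captured frame (step 4), detect candidate feature
    regions (5), keep only those matched against the training
    database (6), convert the matched regions (7), and collect the
    converted frames for video file generation (8)."""
    output = []
    for frame in frames:
        for region in detect(frame):
            if match(region):
                frame = mosaic(frame, region)
        output.append(frame)
    return output  # step (9), pre-review, would happen before saving
```

For instance, with `detect` returning face rectangles and `mosaic` pixelating them, this loop reproduces the per-frame flow of Fig. 1.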
Further, the video information of the target coding unit is acquired in step (1).
Further, in step (2), the target coding unit is matched with the target reference block, and the matching area is displayed.
Further, the training atlas in step (3) includes video information and audio information.
Further, in step (4), the images in the feature training database are classified, and the target video content image data is quickly matched against the classified images in the feature training database.
Further, in step (7), the feature regions are mosaic-processed by a graphics processor.
Further, the video feature training database in step (3) is a set of XML files.
Further, step (3) further includes obtaining audio information from the video content data, the audio information providing an additional basis for coding the video content.
Further, in step (6), the preprocessed images from the target video are stored in the video feature training database to enrich it.
Further, in step (9), the generated video file is compared with the feature training database to judge the coding result of the dynamically coded video.
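Since the video feature training database is described as a set of XML files, one database entry might be serialized as below. The element and attribute names (`feature`, `id`, `label`, `vector`) are illustrative assumptions, not part of the patent.

```python
# Hypothetical serialization of one training-database entry to XML;
# the tag and attribute names are illustrative, not from the patent.
import xml.etree.ElementTree as ET

def feature_entry_xml(feature_id, label, vector):
    """Return one <feature> element of the XML file set as a string."""
    root = ET.Element("feature", id=feature_id, label=label)
    ET.SubElement(root, "vector").text = ",".join(f"{v:.3f}" for v in vector)
    return ET.tostring(root, encoding="unicode")
```

One such file per feature (or per training image) would make up the "XML file set" the text mentions.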
According to this embodiment of the application, the method solves the problems that existing dynamic-mosaic processing generally requires manual identification and coding, that manual coding is slow for dynamic videos, and that a method for implementing a dynamic mosaic based on content coding was lacking.
Drawings
In order to illustrate the embodiments of the present application or the prior-art technical solutions more clearly, the drawings needed for the description are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flow chart of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description, claims, and drawings of this application are used to distinguish similar elements and do not necessarily describe a particular sequential or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments described herein may be practiced in sequences other than those illustrated. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to it.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
The method for implementing a dynamic mosaic based on content coding in this embodiment may be used for dynamic video mosaic processing. As one example, the method may be applied to the following privacy-processing scenario.
A privacy processing device for a medical operation teaching system comprises a decoding module, a face recognition module, a blurring module, and an encoding module.
The acquisition module of the medical operation teaching system captures the video stream of the operation (in formats such as RTMP and RTSP) and feeds it to the decoding module of the privacy processing device. The decoding module may be, for example, a Hikvision DS-6601HFH/L decoder or an HD-EX1000FS-M decoder from Nanjing Hengxinlang Electronic Technology Co., Ltd. The decoding module decodes the video stream into individual frames of audio and video data, including video data in formats such as YUV and RGB and audio data in formats such as AAC, and sends the data to the face recognition module for processing. The face recognition module has deep-learning capability and can improve recognition of side-on faces.
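The decoded frames arrive in formats such as YUV and RGB. As an illustration of the conversion between those two formats (not the patent's own code, and done in hardware by the decoders named above), a standard full-range BT.601 per-pixel YUV-to-RGB conversion looks like this:

```python
# Standard full-range BT.601 YUV (YCbCr) to RGB conversion for one
# 8-bit pixel; illustrative only.

def yuv_to_rgb(y, u, v):
    """Convert one 8-bit YUV sample to a clamped (R, G, B) tuple."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

A neutral gray sample (Y with U = V = 128) maps to the same gray in RGB, which is a quick sanity check on the coefficients.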
The face recognition module recognizes the face information, carries out face recognition processing on the video data, tracks and records related face information data in real time and sends the data to the coding module for processing.
The blurring module applies mosaic processing to the face data identified by the face recognition module and sends the processed data to the encoding module. The encoding module may be, for example, a Hikvision DS-6102HC encoder or a DH-NVS0104HV encoder from Zhejiang Dahua Technology Co., Ltd. The encoding module re-encodes the mosaic-processed video data and sends the encoded data to the output module of the medical operation teaching system, which displays it on the corresponding display terminal.
The privacy processing method of the privacy processing device of the medical operation teaching system comprises the following steps:
step one, an acquisition module of the medical operation teaching system acquires an operation video stream, wherein the video format is RTMP, RTSP and the like. The acquisition module sends the video stream to the decoding module, and the decoding module decodes the video stream into frame data in YUV, RGB and other formats.
Sending video frame data to a face recognition module, recognizing face information of each frame of image by the face recognition module, and tracking and recording related face data in real time;
the second step is specifically as follows:
201. importing face pictures of the medical staff into the face recognition module, generating three-dimensional face features, and building a face feature library from them;
202. importing occluder photos (such as masks and oxygen masks) into the face recognition module, generating three-dimensional occluder features, and building an occluder feature library from them;
203. performing face capture on the image data fed from the acquisition module into the face recognition module, including contour and facial-feature data, and generating three-dimensional face features; each frame of image data may contain several faces (medical staff, patients, and so on), so several faces may be captured per frame;
204. querying each face feature of each frame of image data in the face feature library: if the generated face feature is found there, the face belongs to medical staff and is filtered out without a face mark; if it is not found, the face may belong to a patient, and its occluder feature is compared next; if the occluder feature is found in the occluder feature library, the person is a patient wearing a mask or oxygen mask, so a final face mark is made and the face data is recorded and stored; if the occluder feature is not found in the occluder feature library, the person is not a patient and no face mark is made.
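The decision in step 204 can be written as a small predicate. The set arguments stand in for the face feature library and the occluder ("shelter") feature library; all names here are illustrative assumptions, not the patent's API.

```python
# Step 204 as a predicate: returns True when the detected face should
# receive a final face mark (and later a mosaic). The two sets stand
# in for the face feature library and the occluder feature library.

def should_mark_face(face_feature, occluder_feature, staff_library, occluder_library):
    if face_feature in staff_library:
        return False   # known medical staff: filtered, no mark
    if occluder_feature in occluder_library:
        return True    # masked / oxygen-mask patient: mark the face
    return False       # neither staff nor a recognized occluder: no mark
```

Note the asymmetry described in the text: an unknown face is only marked when a known occluder (mask or oxygen mask) is also recognized.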
Step three: the face recognition module sends the face data in each video frame to the blurring module in real time, and the blurring module applies mosaic blurring to that face data;
the third step is specifically as follows:
301. finding the data area needing mosaic coding in the frame video data from the face position marked by the face recognition module;
302. obtaining the pixels (ARGB format) in the data area needing coding and storing them in memory;
303. traversing the pixels in the data area, starting from the first pixel;
304. judging whether the pixel's position is an integral multiple of the mosaic width; if not, proceeding to step 305; if it is, proceeding to step 306;
305. replacing the pixel with the pixel at the enclosing integral-multiple position, then proceeding to step 306;
306. judging whether the traversal is finished; if not, returning to step 303; if so, ending.
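Steps 301-306 amount to replacing every pixel with the pixel at the enclosing position that is an integral multiple of the mosaic width. A sketch of that traversal on a flat ARGB buffer follows; it is an illustrative rendering of the described algorithm, not the patent's implementation.

```python
# Steps 301-306: mosaic a face region of a frame stored as a flat
# row-major list of ARGB pixel values (illustrative sketch).

def mosaic_region(pixels, width, x0, y0, x1, y1, mosaic_w):
    out = pixels[:]                       # 302: region pixels held in memory
    for y in range(y0, y1):               # 303: traverse every pixel
        for x in range(x0, x1):
            # 304-305: a pixel whose position is not an integral multiple
            # of the mosaic width takes the value of the pixel at the
            # enclosing multiple-of-mosaic_w position
            ax = (x // mosaic_w) * mosaic_w
            ay = (y // mosaic_w) * mosaic_w
            out[y * width + x] = pixels[ay * width + ax]
    return out                            # 306: traversal finished
```

Applied per frame to the regions marked in step two, this produces a dynamic mosaic that follows the tracked face.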
Step four: the blurring module sends the processed video frames to the encoding module, which re-encodes them into a video stream in a format such as RTSP or RTMP.
Step five: the processed video stream is sent to the output module of the medical operation teaching system and then to the corresponding classroom display terminal, so that the privacy-processed video is displayed.
The embodiment can of course also be used for general dynamic video mosaic processing; the details are not repeated here. A method for implementing a dynamic mosaic based on content coding according to an embodiment of the present application is described below.
Example one
Referring to fig. 1, a method for implementing a dynamic mosaic based on content coding includes the following steps:
(1) screening out target coding units to be coded in the video content image according to target reference blocks made from existing video content images;
(2) performing video content coding based on the target coding units and the target reference blocks;
(3) creating a video feature training database from a training atlas according to the video content coding;
(4) capturing target video content image data;
(5) monitoring feature regions in the target video content image data and performing image preprocessing;
(6) matching the preprocessed images against the images in the feature training database to identify the feature regions;
(7) performing image conversion on the identified feature regions;
(8) generating a video file from the converted video image data;
(9) pre-reviewing the generated video file and storing it after it passes review.
Further, the video information of the target coding unit is acquired in step (1).
Further, in step (2), the target coding unit is matched with the target reference block, and the matching area is displayed.
Further, the training atlas in step (3) includes video information and audio information.
Further, in step (4), the images in the feature training database are classified, and the target video content image data is quickly matched against the classified images.
Further, in step (7), the feature regions are mosaic-processed by a graphics processor.
Further, the video feature training database in step (3) is a set of XML files.
Further, step (3) further includes obtaining audio information from the video content data, the audio information providing an additional basis for coding the video content.
Further, in step (6), the preprocessed images from the target video are stored in the video feature training database to enrich it.
Further, in step (9), the generated video file is compared with the feature training database to judge the coding result of the dynamically coded video.
The method can solve the problems that existing dynamic-mosaic processing generally requires manual identification and coding, that manual coding is slow for dynamic videos, and that a method for implementing a dynamic mosaic based on content coding was lacking.
Example two
Referring to fig. 1, a method for implementing a dynamic mosaic based on content coding includes the following steps:
(1) screening out target coding units to be coded in the video content image according to target reference blocks made from existing video content images;
(2) performing video content coding based on the target coding units and the target reference blocks;
(3) creating a video feature training database from a training atlas according to the video content coding;
(4) capturing target video content image data;
(5) monitoring feature regions in the target video content image data and performing image preprocessing;
(6) matching the preprocessed images against the images in the feature training database to identify the feature regions;
(7) performing image conversion on the identified feature regions;
(8) generating a video file from the converted video image data;
(9) pre-reviewing the generated video file and storing it after it passes review.
Further, the video information of the target coding unit is acquired in step (1).
Further, in step (2), the target coding unit is matched with the target reference block, and the matching area is displayed.
Further, the training atlas in step (3) includes video information and audio information.
Further, in step (4), the images in the feature training database are classified, and the target video content image data is quickly matched against the classified images in the feature training database.
Further, in step (7), the feature regions are mosaic-processed by a graphics processor.
Further, the video feature training database in step (3) is a set of XML files.
Further, step (3) further includes obtaining audio information from the video content data, the audio information providing an additional basis for coding the video content.
Further, in step (6), the preprocessed images from the target video are stored in the video feature training database to enrich it.
Further, in step (9), the generated video file is compared with the feature training database to judge the coding result of the dynamically coded video.
The method can solve the problems that existing dynamic-mosaic processing generally requires manual identification and coding, that manual coding is slow for dynamic videos, and that a method for implementing a dynamic mosaic based on content coding was lacking.
The application has the following advantage:
the method for implementing a dynamic mosaic based on content coding can solve the problems that existing dynamic-mosaic processing generally requires manual identification and coding, that manual coding is slow for dynamic videos, and that a method for implementing a dynamic mosaic based on content coding was lacking.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall fall within its protection scope.

Claims (10)

1. A method for implementing a dynamic mosaic based on content coding, characterized in that the method comprises the following steps:
(1) screening out target coding units to be coded in the video content image according to target reference blocks made from existing video content images;
(2) performing video content coding based on the target coding units and the target reference blocks;
(3) creating a video feature training database from a training atlas according to the video content coding;
(4) capturing target video content image data;
(5) monitoring feature regions in the target video content image data and performing image preprocessing;
(6) matching the preprocessed images against the images in the feature training database to identify the feature regions;
(7) performing image conversion on the identified feature regions;
(8) generating a video file from the converted video image data;
(9) pre-reviewing the generated video file and storing it after it passes review.
2. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that the video information of the target coding unit is acquired in step (1).
3. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that in step (2) the target coding unit is matched with the target reference block and the matching area is displayed.
4. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that the training atlas in step (3) includes video information and audio information.
5. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that in step (4) the images in the feature training database are classified, and the target video content image data is quickly matched against the classified images in the feature training database.
6. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that in step (7) the feature regions are mosaic-processed by a graphics processor.
7. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that the video feature training database in step (3) is a set of XML files.
8. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that step (3) further comprises obtaining audio information from the video content data, the audio information providing a basis for coding the video content.
9. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that in step (6) the preprocessed images from the target video are stored in the video feature training database to enrich it.
10. The method for implementing a dynamic mosaic based on content coding as claimed in claim 1, characterized in that in step (9) the generated video file is compared with the feature training database to judge the coding result of the dynamically coded video.
CN202111276922.1A 2021-10-29 2021-10-29 Method for realizing dynamic mosaic based on content coding Pending CN115529460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111276922.1A CN115529460A (en) 2021-10-29 2021-10-29 Method for realizing dynamic mosaic based on content coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111276922.1A CN115529460A (en) 2021-10-29 2021-10-29 Method for realizing dynamic mosaic based on content coding

Publications (1)

Publication Number Publication Date
CN115529460A 2022-12-27

Family

ID=84694977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111276922.1A Pending CN115529460A (en) 2021-10-29 2021-10-29 Method for realizing dynamic mosaic based on content coding

Country Status (1)

Country Link
CN (1) CN115529460A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103609A (en) * 2009-12-21 2011-06-22 北京中星微电子有限公司 Information retrieval method and system
CN103049755A (en) * 2012-12-28 2013-04-17 合一网络技术(北京)有限公司 Method and device for realizing dynamic video mosaic
CN106028114A (en) * 2016-05-19 2016-10-12 浙江大华技术股份有限公司 Witness protection method and device for collecting audio/video evidence in real time
CN107122439A (en) * 2017-04-21 2017-09-01 图麟信息科技(深圳)有限公司 A kind of video segment querying method and device
CN109063506A (en) * 2018-07-09 2018-12-21 江苏达实久信数字医疗科技有限公司 Privacy processing method for medical operating teaching system
CN111291599A (en) * 2018-12-07 2020-06-16 杭州海康威视数字技术股份有限公司 Image processing method and device
CN111669595A (en) * 2020-05-26 2020-09-15 腾讯科技(深圳)有限公司 Screen content coding method, device, equipment and medium
CN112328830A (en) * 2019-08-05 2021-02-05 Tcl集团股份有限公司 Information positioning method based on deep learning and related equipment


Similar Documents

Publication Publication Date Title
CN107004271B (en) Display method, display apparatus, electronic device, computer program product, and storage medium
US6101274A (en) Method and apparatus for detecting and interpreting textual captions in digital video signals
CN110119757A (en) Model training method, video category detection method, device, electronic equipment and computer-readable medium
CN109063506B (en) Privacy processing method for medical operation teaching system
CN109740572B (en) Human face living body detection method based on local color texture features
CN106791854B (en) Image coding, coding/decoding method and device
TW201201107A (en) Barcode image recognition system and associated method for hand-held device
CN106550244A (en) The picture quality enhancement method and device of video image
CN110096945B (en) Indoor monitoring video key frame real-time extraction method based on machine learning
KR20190079047A (en) A supporting system and method that assist partial inspections of suspicious objects in cctv video streams by using multi-level object recognition technology to reduce workload of human-eye based inspectors
WO2023005740A1 (en) Image encoding, decoding, reconstruction, and analysis methods, system, and electronic device
CN107943811A (en) The dissemination method and device of content
CN114596259A (en) Method, device, equipment and storage medium for determining reference-free video quality
CN107483916A (en) The control method of audio frequency and video archival quality detecting system
CN112801037A (en) Face tampering detection method based on continuous inter-frame difference
CN115314713A (en) Method, system and device for extracting target segment in real time based on accelerated video
US20130208984A1 (en) Content scene determination device
CN114677644A (en) Student seating distribution identification method and system based on classroom monitoring video
CN113963162A (en) Helmet wearing identification method and device, computer equipment and storage medium
TW201738844A (en) Device and method for monitoring, method for counting people at a location
CN115529460A (en) Method for realizing dynamic mosaic based on content coding
CN112866786A (en) Video data processing method and device, terminal equipment and storage medium
CN105027552B (en) Image processing equipment and image processing method
CN110379130A (en) A kind of Medical nursing shatter-resistant adjustable voltage system based on multi-path high-definition SDI video
CN113011300A (en) Method, system and equipment for AI visual identification of violation behavior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination