CN112965693A - Video analysis software design method based on edge computing - Google Patents
Video analysis software design method based on edge computing
- Publication number
- CN112965693A (application CN202110191040.9A)
- Authority
- CN
- China
- Prior art keywords
- video
- software
- person
- edge
- action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/20—Software design
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234309—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
Abstract
The invention discloses a video analysis software design method based on edge computing, comprising the following steps: S1, integrating a plurality of sub-software units into a software box; S2, establishing an edge identity database of faces, fingerprints, voices and contact information; S3, after acquiring a person's real-name face, fingerprint or voice information, capturing a video stream of the person's body movements and movement trajectory; and S4, importing the real-time video stream into the software box, analyzing identity matching, the person's face, body movements, movement trajectory, positioning and the time-stamped video stream, judging whether the person's behavior, body movements or trajectory complies with regulations, and exporting the analysis result. The design method is based on edge computing: it integrates a plurality of sub-software units, performs video analysis on the imported video, stores the video streams and identity data of persons whose behavior, movements or trajectory do not comply with regulations on an edge server, and generates violation records on the edge server.
Description
Technical Field
The invention relates to the technical field of software design, in particular to a video analysis software design method based on edge computing.
Background
Edge computing refers to an open platform that integrates networking, computing, storage and core application capabilities at the side of the network closest to the object or data source, providing services at the nearest end. Because applications are launched on the edge side, network responses are faster, meeting the industry's basic requirements for real-time operation, application intelligence, security and privacy protection. Edge computing sits between the physical entities and the industrial connection, or on top of the physical entities, while cloud computing can still access the historical data produced at the edge.
At present, software design and development require a developer to carry out targeted product function design, system architecture design and data structure design according to the user's business requirements; the developer then codes the program according to the design, and after development the software system with fixed functions (developed according to the user's requirements) is delivered to the user. Video analysis establishes a mapping between an image and a description of that image: by finding meaningful structures and patterns in video data and processing and analyzing the images, the content of a picture can be understood.
Existing video analysis design methods cannot simultaneously perform target detection and behavior analysis across the various picture relationships in a video, and the resulting software cannot supervise and schedule the various video processing components, so accuracy is low and human-computer interaction is inconvenient.
In view of the above, the video analysis design methods in the prior art need to be improved to solve the problems of unreasonable scheduling of multiple processing components, inconvenient human-computer interaction and low accuracy.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a video analysis software design method based on edge computing.
To achieve this purpose, the invention adopts the following technical scheme: a video analysis software design method based on edge computing, wherein the video software comprises a plurality of sub-software units, the method comprising the following steps:
S1, software integration: integrating a plurality of sub-software units into a software box;
S2, establishing an edge database: establishing an edge identity database of faces, fingerprints, voices and contact information, storing each person's exclusive serial number, and realizing real-name authentication;
S3, video acquisition: after acquiring a person's real-name face, fingerprint or voice information, capturing a real-time video stream of the person's body movements and movement trajectory;
S4, video edge analysis: importing the real-time video stream into the software box, where the sub-software units analyze identity matching, the person's face, body movements, movement trajectory, positioning and the time-stamped video stream, judge whether the person's behavior, body movements or trajectory complies with regulations, and export the analysis result.
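As a rough illustration of steps S1–S4, the sketch below models the software box and the edge identity database in Python. All names (`SoftwareBox`, `IdentityDatabase`, the placeholder check units, and the exact-string identity lookup) are illustrative assumptions, not taken from the patent; a real system would use biometric matching rather than string equality.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityRecord:
    serial: str        # exclusive per-person serial number (real-name authentication)
    face: str
    fingerprint: str
    voice: str
    contact: str

@dataclass
class IdentityDatabase:
    """S2: the edge identity database of faces, fingerprints, voices and contacts."""
    records: dict = field(default_factory=dict)

    def enroll(self, rec: IdentityRecord) -> None:
        self.records[rec.serial] = rec

    def match(self, face: str):
        # placeholder for real biometric matching
        return next((r for r in self.records.values() if r.face == face), None)

class SoftwareBox:
    """S1: several sub-software units integrated behind one import interface."""
    def __init__(self, units):
        self.units = units

    def analyze(self, stream: dict) -> dict:
        # S4: run every unit on the imported stream and export the results
        return {unit.__name__: unit(stream) for unit in self.units}

def action_check(stream):        # does every body movement comply?
    return all(a in {"walk", "stand"} for a in stream["actions"])

def trajectory_check(stream):    # does the trajectory end in an allowed zone?
    return stream["trajectory"][-1] in stream["allowed_zones"]

db = IdentityDatabase()
db.enroll(IdentityRecord("P001", "face-A", "fp-A", "voice-A", "a@example.com"))
stream = {"face": "face-A", "actions": ["walk", "run"],
          "trajectory": ["gate", "hall"], "allowed_zones": {"hall"}}
person = db.match(stream["face"])                      # S3: real-name acquisition
result = SoftwareBox([action_check, trajectory_check]).analyze(stream)
```

Here `result` flags the non-compliant "run" movement while accepting the trajectory — the yes/no judgment that step S4 exports.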
In a preferred embodiment of the present invention, S4 is followed by an edge storage step: deleting video streams whose behavior, movements or trajectory comply with regulations; storing video streams that do not comply, together with the person's identity data, on the edge server; and generating violation records on the edge server.
In a preferred embodiment of the present invention, the video analysis result can also be retrieved through a client: the non-staff client comprises a self-check web window and a complaint web window; the staff client supports retrieving and removing files.
In a preferred embodiment of the present invention, after receiving the video data, the edge server sends a message to the person's registered contact information as a reminder.
In a preferred embodiment of the present invention, before the video stream and the person's identity data are stored on the edge server, the video stream is compressed to a bounded size and converted to a uniform video stream format.
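One plausible way to realize "compressed to a bounded size, uniform format" is to transcode with ffmpeg; the exact codec, CRF value and resolution cap below are assumptions for illustration, not specified by the patent. Returning the argument vector instead of running it keeps the helper testable without ffmpeg installed:

```python
def normalize_stream(src: str, dst: str, max_height: int = 480) -> list:
    """Build an ffmpeg command that compresses the stream to a bounded size
    and a uniform container/codec (here H.264 in MP4).  Pass the returned
    argv to subprocess.run() to actually transcode."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{max_height}",   # cap resolution -> bounded size
        "-c:v", "libx264", "-crf", "28",   # uniform codec, strong compression
        "-movflags", "+faststart",
        dst,
    ]
```

For example, `subprocess.run(normalize_stream("cam01.avi", "cam01.mp4"), check=True)` would emit a uniform MP4 regardless of the camera's native format.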
In a preferred embodiment of the present invention, the sub-software units at least comprise a face recognition software unit, a voice recognition software unit, a fingerprint recognition software unit, a person positioning software unit, a video analysis software unit, and a cache compression software unit.
In a preferred embodiment of the present invention, the self-check web window presents a person's real-name data and violation information, and the complaint web window presents real-time frames of the decompressed video together with the video analysis result.
In a preferred embodiment of the present invention, the software box defines a video stream import interface and a video stream export interface for the sub-software units, and the sub-software units are arranged in a parallel or serial structure.
In a preferred embodiment of the present invention, the software box further comprises a container orchestrator whose operations include resource supervision and task scheduling for the video stream data.
In a preferred embodiment of the present invention, the person positioning software unit is based on person target calibration and scene extraction in the video stream.
In a preferred embodiment of the present invention, the face recognition software unit screens and identifies persons according to face similarity.
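Screening by face similarity typically means comparing embedding vectors against a threshold. The sketch below assumes the embeddings are already extracted (the patent does not specify a model) and uses cosine similarity; the threshold value is an illustrative assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def screen_by_similarity(probe, gallery, threshold=0.9):
    """Return (serial, score) pairs from the identity gallery whose face
    embedding clears the threshold, best match first."""
    hits = [(serial, cosine_similarity(probe, emb))
            for serial, emb in gallery.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)
```

A probe close to one enrolled embedding then yields a single confident hit, which is the screening behavior the unit relies on.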
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention provides a video analysis software design method based on edge computing that integrates a plurality of sub-software units, performs real-name detection on the faces, fingerprints and voices in the imported video, analyzes persons' body movements and movement trajectories, stores the video streams and identity data that do not comply with the specified behavior, movements or trajectories on an edge server, and generates violation records on the edge server.
(2) By designing a visual web interface, the video analysis software can share analysis results with users in real time; it divides clients into a non-staff client and a staff client, so that a person can self-check a violation record, or a staff member can remove a video stream.
(3) To store the most valuable video data in the least space, the software stores video data selectively based on the analysis result: storage of invalid video data is reduced while valid video data is retained as fully as possible, improving the utilization of the edge server's storage space.
(4) The sub-software units at least comprise a face recognition software unit, a voice recognition software unit, a fingerprint recognition software unit, a person positioning software unit, a video analysis software unit, and a cache compression software unit. Their coordination and cooperation form a complete video analysis system. This modular functional design improves the system's extensibility, keeps the software design clear, and makes system functions easy to extend and maintain.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the human-computer interaction client of the video analysis software according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
In the description of the present application, it is to be understood that the terms "center," "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the present application and for simplicity in description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated in a particular manner, and are not to be considered limiting of the scope of the present application. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the invention, the meaning of "a plurality" is two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art through specific situations.
As shown in FIG. 1, a video analysis software design method based on edge computing integrates a plurality of sub-software units, performs real-name detection on the faces, fingerprints and voices in the imported video, analyzes persons' body movements and movement trajectories, stores the video streams and identity data that do not comply with the specified behavior, movements or trajectories on an edge server, and generates violation records on the edge server.
The video software comprises a plurality of sub-software units, and the method specifically comprises the following steps:
S1, software integration: integrating a plurality of sub-software units into a software box;
S2, establishing an edge database: establishing an edge identity database of faces, fingerprints, voices and contact information, storing each person's exclusive serial number, and realizing real-name authentication;
S3, video acquisition: after acquiring a person's real-name face, fingerprint or voice information, capturing a real-time video stream of the person's body movements and movement trajectory;
S4, video edge analysis: importing the real-time video stream into the software box, where the sub-software units analyze identity matching, the person's face, body movements, movement trajectory, positioning and the time-stamped video stream, judge whether the person's behavior, body movements or trajectory complies with regulations, and export the analysis result.
S5, edge storage: deleting video streams whose behavior, movements or trajectory comply with regulations; storing video streams that do not comply, together with the person's identity data, on the edge server; and generating violation records on the edge server.
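The edge storage step S5 reduces to a filter. A minimal sketch, assuming each analyzed stream carries a `conforms` flag and modeling the edge server as three lists (`videos`, `identities`, `errors` are all illustrative names, not from the patent):

```python
def edge_store(analyzed_streams, server):
    """S5: drop conforming streams, persist non-conforming ones with the
    person's identity data, and log a violation record on the edge server."""
    for stream in analyzed_streams:
        if stream["conforms"]:
            continue                       # conforming footage is deleted
        server["videos"].append(stream["video"])
        server["identities"].append(stream["serial"])
        server["errors"].append({"serial": stream["serial"],
                                 "reason": stream["reason"]})
    return server
```

Only the non-conforming footage and its identity data survive, which is exactly the selective-storage behavior the patent attributes to the edge server.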
FIG. 2 is a schematic diagram of the human-computer interaction client of the video analysis software. By designing a visual web interface, the software presents the analysis result of the video stream to users in real time through the client's web view, and divides clients into a non-staff client and a staff client. The non-staff client comprises a self-check web window, which shows a person's real-name data and violation information, and a complaint web window, which shows real-time frames of the decompressed video together with the video analysis result. The staff client supports retrieving and removing files: after a non-staff member files a complaint through the web window and the complaint is audited, the corresponding video stream file can be removed.
After receiving the video data, the edge server sends a message to the person's contact information as a reminder.
The sub-software units at least comprise a face recognition software unit, a voice recognition software unit, a fingerprint recognition software unit, a person positioning software unit, a video analysis software unit, and a cache compression software unit: the software is divided into several functional sub-units whose coordination and cooperation form a complete video analysis system. This modular functional design improves the system's extensibility, keeps the software design clear, and makes system functions easy to extend and maintain.
To store the most valuable video data in the least space, the software stores video data selectively based on the analysis result: storage of invalid video data is reduced while valid video data is retained as fully as possible, improving the utilization of the edge server's storage space. Before the video stream and the person's identity data are stored on the edge server, the video stream is compressed to a bounded size and converted to a uniform video stream format.
The software box defines a video stream import interface and a video stream export interface for the sub-software units, and the sub-software units are arranged in a parallel or serial structure.
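The serial and parallel arrangements of sub-software units behind the software box's interfaces can be sketched as two composition strategies. This is a toy model under the assumption that each unit is a callable; real units would operate on video frames rather than numbers:

```python
from concurrent.futures import ThreadPoolExecutor

def run_serial(units, frame):
    """Serial structure: each unit consumes the previous unit's output."""
    for unit in units:
        frame = unit(frame)
    return frame

def run_parallel(units, frame):
    """Parallel structure: every unit sees the same imported frame,
    and the box collects all results in unit order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(unit, frame) for unit in units]
        return [f.result() for f in futures]
```

The serial form suits stages that transform the stream (decode, then detect, then compress), while the parallel form suits independent analyses (face, voice, positioning) of the same stream.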
The invention collects the video stream from a single camera: a single camera is easy to deploy; analyzing a single-path video stream is simpler, since the system need not coordinate multiple cameras during analysis; and network bandwidth consumption is reduced, because importing a video stream from a network camera consumes bandwidth, and a single camera saves bandwidth compared with multiple cameras.
The software box also comprises a container orchestrator whose operations include resource supervision and task scheduling for the video stream data.
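A container orchestrator's task scheduling could, for example, be least-loaded-first over the sub-software containers. This toy version keeps a heap keyed by accumulated load and exposes per-container loads for resource supervision; `Orchestrator` and its methods are illustrative names, not an API from the patent:

```python
import heapq

class Orchestrator:
    """Toy scheduler: assign each video-stream task to the least-loaded
    sub-software container and track its resource usage."""
    def __init__(self, containers):
        self.heap = [(0, name) for name in containers]  # (load, container)
        heapq.heapify(self.heap)

    def schedule(self, task_cost):
        """Pick the least-loaded container, charge it, and return its name."""
        load, name = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + task_cost, name))
        return name

    def loads(self):
        """Resource supervision: current load per container."""
        return {name: load for load, name in self.heap}
```

In practice this role is usually delegated to an existing orchestrator such as Kubernetes rather than written by hand.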
The person positioning software unit performs person target calibration and scene extraction on the video stream; its positioning technologies mainly include infrared, Wi-Fi, Bluetooth, and ultrasonic positioning. The face recognition software unit screens and identifies persons by face similarity.
The video analysis system follows a front-end/back-end separation design, where separation means splitting the video stream acquisition end from the video stream analysis end. Terminals such as network cameras have limited computing power and hardware resources, so performing video analysis directly on the camera would degrade system performance. With front-end/back-end separation, the camera is only responsible for collecting the video stream, while the corresponding video computation runs in the video stream analysis box, improving the system's processing performance.
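Front-end/back-end separation means the capture end only forwards frames while all computation happens at the analysis end. A minimal in-process sketch, using a queue as a stand-in for the network link (a real deployment would stream over RTSP or similar, and the uppercasing stands in for actual analysis):

```python
import queue
import threading

def capture_end(frames, q):
    """Front end: the camera only collects and forwards the stream."""
    for frame in frames:
        q.put(frame)
    q.put(None)                      # end-of-stream marker

def analysis_end(q, results):
    """Back end: all video computation happens in the analysis box."""
    while (frame := q.get()) is not None:
        results.append(frame.upper())   # placeholder for real analysis

q = queue.Queue()
results = []
worker = threading.Thread(target=analysis_end, args=(q, results))
worker.start()
capture_end(["frame-a", "frame-b"], q)
worker.join()
```

Because the camera side never blocks on analysis, a slow analysis box backs up the queue instead of stalling capture, which is the performance benefit the separation aims for.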
In light of the foregoing description of the preferred embodiments, those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. The technical scope of the invention is not limited to the content of the specification and must be determined by the scope of the claims.
Claims (10)
1. A video analysis software design method based on edge computing, wherein the video software comprises a plurality of sub-software units, the method comprising the following steps:
S1, software integration: integrating a plurality of sub-software units into a software box;
S2, establishing an edge database: establishing an edge identity database of faces, fingerprints, voices and contact information, storing each person's exclusive serial number, and realizing real-name authentication;
S3, video acquisition: after acquiring a person's real-name face, fingerprint or voice information, capturing a real-time video stream of the person's body movements and movement trajectory;
S4, video edge analysis: importing the real-time video stream into the software box, where the sub-software units analyze identity matching, the person's face, body movements, movement trajectory, positioning and the time-stamped video stream, judge whether the person's behavior, body movements or trajectory complies with regulations, and export the analysis result.
2. The method of claim 1, wherein S4 is followed by an edge storage step: deleting video streams that comply with the specified behavior, movements or trajectory; storing video streams that do not comply, together with the person's identity data, on the edge server; and generating violation records on the edge server.
3. The method of claim 1, wherein the video analysis result can also be retrieved through a client: the non-staff client comprises a self-check web window and a complaint web window; the staff client supports retrieving and removing files.
4. The method of claim 2, wherein after receiving the video data, the edge server sends a message to the person's contact information as a reminder.
5. The method of claim 2, wherein before the video stream and the person's identity data are stored on the edge server, the video stream is compressed to a bounded size and converted to a uniform video stream format.
6. The method of claim 1, wherein the sub-software units at least comprise a face recognition software unit, a voice recognition software unit, a fingerprint recognition software unit, a person positioning software unit, a video analysis software unit, and a cache compression software unit.
7. The method of claim 3, wherein the self-check web window presents a person's real-name data and violation information, and the complaint web window presents real-time frames of the decompressed video and the video analysis result.
8. The method of claim 1, wherein the software box defines a video stream import interface and a video stream export interface for the sub-software units, and the sub-software units are arranged in a parallel or serial structure.
9. The method of claim 1, wherein the software box further comprises a container orchestrator whose operations include resource supervision and task scheduling for the video stream data.
10. The method of claim 1, wherein the person positioning software unit is based on person target calibration and scene extraction in the video stream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110191040.9A CN112965693A (en) | 2021-02-19 | 2021-02-19 | Video analysis software design method based on edge computing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112965693A true CN112965693A (en) | 2021-06-15 |
Family
ID=76285167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110191040.9A Pending CN112965693A (en) | 2021-02-19 | 2021-02-19 | Video analysis software design method based on edge calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112965693A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110399225A (en) * | 2019-07-29 | 2019-11-01 | 中国工商银行股份有限公司 | Monitoring information processing method, system and computer system |
US20190377953A1 (en) * | 2018-06-06 | 2019-12-12 | Seventh Sense Artificial Intelligence Pvt Ltd | Network switching appliance, process and system for performing visual analytics for a streaming video |
CN110795595A (en) * | 2019-09-10 | 2020-02-14 | 安徽南瑞继远电网技术有限公司 | Video structured storage method, device, equipment and medium based on edge calculation |
CN111698470A (en) * | 2020-06-03 | 2020-09-22 | 河南省民盛安防服务有限公司 | Security video monitoring system based on cloud edge cooperative computing and implementation method thereof |
CN112153334A (en) * | 2020-09-15 | 2020-12-29 | 公安部第三研究所 | Intelligent video box equipment for safety management and corresponding intelligent video analysis method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210615 |