CN112861775B - Deep neural network-based meeting participant identification and recording system and method - Google Patents

Deep neural network-based meeting participant identification and recording system and method

Info

Publication number
CN112861775B
CN112861775B
Authority
CN
China
Prior art keywords
module
camera module
image
meeting
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110243416.6A
Other languages
Chinese (zh)
Other versions
CN112861775A (en)
Inventor
罗恒阳
程飞
孙志青
邵闻达
殷黎明
张磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Weishitong Intelligent Technology Co ltd
Original Assignee
Suzhou Weishitong Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Weishitong Intelligent Technology Co ltd
Priority to CN202110243416.6A
Publication of CN112861775A
Application granted
Publication of CN112861775B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/10 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people together with the recording, indicating or registering of other data, e.g. of signs of identity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a meeting participant identification and recording system and method based on a deep neural network, belonging to the technical field of meeting attendance management. The system comprises a camera module, an edge computing device and a storage device, the camera module and the edge computing device both being installed in the meeting area. The edge computing device comprises a power supply module, an AI core computing module and a network transmission module, and is connected to the storage device through the network transmission module. The camera module captures video and pictures in the meeting area, the network transmission module transmits them to the storage device, and the AI core computing module retrieves the videos and pictures from the storage device for processing and detection. The invention offers high flexibility and convenience, automatically and accurately identifies and records the participants, and helps reduce the management difficulty and cost for conference organizers.

Description

Deep neural network-based meeting participant identification and recording system and method
Technical Field
The invention relates to the technical field of conference attendance management, and in particular to a system and method for identifying and recording meeting participants based on a deep neural network.
Background
Attendance management is the system by which enterprises and public institutions check and manage staff attendance, for example whether staff arrive late or leave early and whether absences are approved; meeting participants likewise need to be checked and managed when meetings are held at construction sites and similar locations. Three methods are in common use: manual form statistics and filling, meeting check-in through a mobile phone APP, and check-in by swiping a participant card.
The prior art has the following defects. Manual form statistics and filling requires dedicated management personnel to enter and organize participant information. Mobile phone APP check-in requires participants to carry mobile phones, computers or other hardware capable of running the APP, and is unsuitable for places where electronic products are restricted or inconvenient to use. Card-swiping check-in requires participants to wear a dedicated electronic card, so lost cards and swiping on another person's behalf easily occur. All three methods lack flexibility and convenience and their accuracy is difficult to guarantee, and these problems need to be solved.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a deep-neural-network-based participant identification and recording system and method that offer high flexibility and convenience, automatically and accurately identify and record participants, and help reduce the management difficulty and cost for conference organizers.
A deep-neural-network-based meeting participant identification and recording system comprises a camera module, an edge computing device and a storage device. The camera module and the edge computing device are both installed in the meeting area. The edge computing device comprises a power supply module, an AI core computing module and a network transmission module, and is connected to the storage device through the network transmission module. The camera module captures video and pictures in the meeting area, the network transmission module transmits them to the storage device, and the AI core computing module retrieves the videos and pictures from the storage device for processing, detection and identification.
The invention is further configured as follows: the AI core computing module comprises a model application module, which in turn comprises a detection unit, a first judgment unit and a second judgment unit; an image processing algorithm reads the camera module's stream, extracts each frame and feeds it into the detection unit, which detects the number of people and their positions in the image;
the first judgment unit sets a start threshold: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the meeting is judged to have started and the camera module starts video recording;
the second judgment unit sets an end threshold: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the meeting is judged to have ended, and the camera module stops recording and uploads the video to the storage device for storage.
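The start/end behaviour of the two judgment units amounts to a small hysteresis state machine over the per-frame person count. A minimal Python sketch under stated assumptions: the class and parameter names are illustrative rather than taken from the patent, and the "distribution matches meeting characteristics" test is abstracted into a boolean supplied by the caller.

```python
class MeetingStateMachine:
    """Hysteresis over per-frame person counts: recording starts when the
    count exceeds start_threshold (first judgment unit) and stops when it
    falls below end_threshold (second judgment unit)."""

    def __init__(self, start_threshold, end_threshold):
        assert end_threshold <= start_threshold
        self.start_threshold = start_threshold
        self.end_threshold = end_threshold
        self.recording = False

    def update(self, person_count, distribution_ok):
        if not self.recording:
            # First judgment unit: count above start threshold and the
            # person distribution matches the meeting characteristics.
            if person_count > self.start_threshold and distribution_ok:
                self.recording = True
        else:
            # Second judgment unit: count below end threshold and the
            # distribution no longer matches the meeting characteristics.
            if person_count < self.end_threshold and not distribution_ok:
                self.recording = False
        return self.recording
```

With `start_threshold=5` and `end_threshold=3`, a brief dip to 4 people mid-meeting does not stop the recording, which is the point of keeping two separate thresholds.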
The invention is further configured as follows: the detection unit is connected to a model training module comprising a data set production unit and a model training unit; the data set production unit retrieves videos and images from the storage device and converts them into a data set, and the model training unit trains on the data set and outputs a model to the detection unit.
The invention is further configured as follows: given a single input picture, the detection unit obtains all segmentation instances that match the characteristics of meeting participants.
The invention is further configured as follows: the model training unit uses a model pre-trained on dedicated data, or a derivative thereof, including Faster R-CNN, SSD and YOLO.
The invention is further configured as follows: the data set production unit preprocesses and enhances the input pictures by computer algorithm, the preprocessing and enhancement including color space conversion, image resizing and image projection transformation.
The invention is further configured as follows: the camera module is any one of a monitoring camera, a wide-angle camera and an infrared camera; the network transmission module is a wired or wireless network module; and the storage device is any one of a hard disk, a memory card, a private cloud server and a public cloud server.
A deep-neural-network-based participant identification and recording method comprises the following steps:
S1: the data set production unit retrieves the videos and photos uploaded to the storage device by the camera module and preprocesses them to form a data set;
S2: the hyperparameters of the model training unit's deep neural network model are adjusted iteratively; once the set recognition accuracy and recall are reached, tuning stops and the adjusted model is deployed to the detection unit;
S3: a computer vision algorithm reads the camera module in real time, extracts each frame and feeds it into the tested model in the detection unit, which identifies the number and positions of people and outputs them to the first judgment unit;
S4: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the first judgment unit judges that the meeting has started and the camera module starts video recording;
S5: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the second judgment unit judges that the meeting has ended, and the camera module stops recording and saves the video to the storage device.
The invention is further configured such that S1 comprises the following steps:
A1: retrieve the videos and photos shot by the camera module from the storage device, extract frames from the videos, and screen out the pictures containing people;
A2: annotate the pictures with an image annotation tool;
A3: resize the pictures using an image processing algorithm;
A4: apply enhancement operations to the data set (geometric transformation, random cropping, standardization and normalization, brightness and contrast adjustment) and shuffle the data set;
A5: split the data set into a training set and a test set at a ratio of m:n, using 8:2 or 99:1 depending on the data volume;
A6: convert the data set into the data format required for model training.
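Steps A4 and A5 above reduce to shuffling the sample list and splitting it by data volume. A minimal sketch, assuming a size cutoff for "large" (the text only gives the two ratios, so the `LARGE_DATASET` constant here is an assumption):

```python
import random

LARGE_DATASET = 10_000  # assumed cutoff; the patent only says "large data volume"

def split_dataset(samples, seed=0):
    """Shuffle the samples (the A4 'scrambling' step) and split into
    train/test at 99:1 for large data sets, 8:2 otherwise (step A5)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    train_frac = 0.99 if len(samples) >= LARGE_DATASET else 0.8
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]
```

Shuffling before the split ensures the test set is not biased toward frames captured late in the collection period.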
In summary, the invention has the following beneficial effects:
1. By installing a camera module, an edge computing device and a storage device in the meeting area, people in the meeting area can be photographed, identified and recorded, effectively reducing the management difficulty and cost of conference organizations.
2. Through the detection unit, the first judgment unit and the second judgment unit, the number and positions of participants are judged and video is recorded, facilitating subsequent attendance statistics.
3. The model training module improves the accuracy of the detection unit and, in turn, the accuracy of attendance statistics.
Drawings
FIG. 1 is a schematic diagram of the connection of various modules in the present invention;
FIG. 2 is a flow chart for embodying the overall operation of the present invention;
FIG. 3 is a flow chart for embodying the model training module in the present invention.
Detailed Description
Further advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure of this specification taken in conjunction with the accompanying drawings. Terms such as "upper", "lower", "left", "right" and "middle" are used in this specification for convenience of description and are not intended to limit the scope of the invention; relative changes or modifications without substantial alteration of the technical content are also regarded as within the scope of the invention.
Embodiment:
As shown in figs. 1 to 3, the deep-neural-network-based participant identification and recording system of the invention comprises a camera module, an edge computing device and a storage device; the edge computing device has a storage function and can query data over a period of time. The camera module and the edge computing device are both installed in the meeting area. The camera module may be any one of a monitoring camera, a wide-angle camera and an infrared camera; the network transmission module may be a wired or wireless network module; and the storage device may be any one of a hard disk, a memory card, a private cloud server and a public cloud server. The edge computing device transmits pictures, videos and identification results to the storage device for longer-term storage.
As shown in fig. 1, in this embodiment the camera module is a wide-angle camera, the network transmission module is a 4G network module, and the storage device is a public cloud server. The edge computing device comprises a power supply module, an AI core computing module and a network transmission module, and is connected to the storage device through the network transmission module. The camera module captures video and pictures in the meeting area, the network transmission module transmits them to the storage device, and the AI core computing module retrieves the videos and pictures from the storage device for processing and detection.
As shown in figs. 2 and 3, the AI core computing module includes a model application module comprising a detection unit, a first judgment unit and a second judgment unit. An image processing algorithm (OpenCV in this embodiment) reads the camera module's stream, extracts each frame and feeds it into the detection unit, which detects the number of people and their positions in the image; given a single input picture, the detection unit obtains all segmentation instances that match the characteristics of meeting participants;
the first judgment unit sets a start threshold: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the meeting is judged to have started and the camera module starts video recording;
the second judgment unit sets an end threshold: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the meeting is judged to have ended, and the camera module stops recording and uploads the video to the storage device. The start and end thresholds are adjusted according to the number of meeting participants.
As shown in figs. 2 and 3, the detection unit is connected to a model training module comprising a data set production unit and a model training unit. The data set production unit retrieves as many videos and images as possible from the storage device and converts them into a data set, and the model training unit trains on the data set and outputs a model to the detection unit. The model training unit uses a model pre-trained on dedicated data, or a derivative thereof, including but not limited to Faster R-CNN, SSD and YOLO; in this embodiment the model network is a Yolov4 neural network under the Darknet framework.
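Whatever detector backs the detection unit (Faster R-CNN, SSD or Yolov4), its raw output still has to be reduced to the person count and positions the judgment units consume. A hedged post-processing sketch, assuming detections arrive as `(class_name, confidence, (x1, y1, x2, y2))` tuples; the real output layout depends on the framework:

```python
def count_people(detections, conf_threshold=0.5):
    """Keep only confident person-class boxes and return the count plus
    the box centers (the 'number and positions' fed to the judgment units)."""
    people = [
        (cls, conf, box)
        for cls, conf, box in detections
        if cls == "person" and conf >= conf_threshold
    ]
    centers = [((x1 + x2) / 2, (y1 + y2) / 2)
               for _, _, (x1, y1, x2, y2) in people]
    return len(people), centers
```

The confidence threshold is the usual knob for trading missed participants against false detections; 0.5 here is only a placeholder default.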
The invention also provides a deep-neural-network-based participant identification and recording method, comprising the following steps:
S1: the data set production unit retrieves the videos and photos uploaded to the storage device by the camera module and preprocesses them to form a data set;
S2: the hyperparameters of the model training unit's deep neural network model are adjusted iteratively; once the set recognition accuracy and recall are reached, tuning stops and the adjusted model is deployed to the detection unit;
S3: a computer vision algorithm reads the camera module in real time, extracts each frame and feeds it into the tested model in the detection unit, which identifies the number and positions of people and outputs them to the first judgment unit;
S4: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the first judgment unit judges that the meeting has started and the camera module starts video recording;
S5: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the second judgment unit judges that the meeting has ended, and the camera module stops recording and saves the video to the storage device.
The invention is further configured such that S1 comprises the following steps:
A1: retrieve the videos and photos shot by the camera module from the storage device, extract frames from the videos, and screen out the pictures containing people;
A2: annotate the pictures with an image annotation tool;
A3: resize the pictures using an image processing algorithm;
A4: apply enhancement operations to the data set (geometric transformation, random cropping, standardization and normalization, brightness and contrast adjustment) and shuffle the data set;
A5: split the data set into a training set and a test set at a ratio of m:n, using 8:2 or 99:1 depending on the data volume;
A6: convert the data set into the data format required for model training.
In use, the site manager who installs the equipment first determines where the meeting will be held and mounts the wide-angle camera and the AI core computing module at a position from which the whole meeting scene can be clearly captured; the storage device, configured as a cloud server, acquires the camera's real-time video stream over the 4G network. The data set production unit downloads from the cloud server as many of the videos and pictures uploaded by the camera module as possible, covering both the venue's daily non-meeting state and its in-meeting state so as to guarantee the size and diversity of the data set, extracts the required pictures from the videos, and screens out the pictures that contain people.
The data set is then annotated, resized, enhanced, shuffled and split. When the data volume is large, the training-to-test ratio is 99:1; otherwise it is 8:2. The data set is then converted into the format required by the model.
Next comes network selection. Recording participants is a detection task with high real-time requirements, so a target detection deep neural network is adopted. Network setup and configuration are completed on the computer.
According to the requirements of the Darknet framework, the data set is converted into VOC format, the corresponding weights of the Yolov4 network are obtained from the official Darknet site, and the network model is then customized: the class settings and hyperparameters are modified, and the hyperparameters are tuned according to training progress and the model's performance on the test set until the training results (number, position, class and confidence of detections) meet expectations. The trained model is then output to the detection unit.
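The VOC-to-Darknet conversion mentioned here is, per box, a standard coordinate change: Darknet label files store one `class cx cy w h` line per object, with coordinates normalized by image width and height. A sketch of the per-box computation:

```python
def voc_to_darknet(box, img_w, img_h):
    """Convert a VOC-style (xmin, ymin, xmax, ymax) pixel box into
    Darknet's normalized (cx, cy, w, h) center format."""
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2.0 / img_w  # box center x, as a fraction of width
    cy = (ymin + ymax) / 2.0 / img_h  # box center y, as a fraction of height
    w = (xmax - xmin) / img_w         # box width fraction
    h = (ymax - ymin) / img_h         # box height fraction
    return cx, cy, w, h
```

Each returned tuple would be written to the image's `.txt` label file, prefixed by the integer class index.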
The image processing algorithm reads the camera module's stream in real time and feeds the extracted frames into the detection unit, which judges the number and positions of people in each image. When the first judgment unit judges that the number of people exceeds the start threshold, the camera module starts video recording; when the second judgment unit judges that the number of people has fallen below the end threshold, the camera module stops recording and transmits the video to the storage device through the network module.
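The runtime loop just described can be sketched end to end by stubbing out the detector; the `detect` callable below stands in for the Yolov4 model and simply returns a person count per frame, so the sketch shows only the control flow, not real inference:

```python
def run_monitor(frames, detect, start_threshold, end_threshold):
    """Drive the per-frame pipeline: detect people in each frame and
    emit the recording on/off decision for that frame."""
    recording = False
    decisions = []
    for frame in frames:
        count = detect(frame)  # detector stub: frame -> person count
        if not recording and count > start_threshold:
            recording = True   # first judgment unit: meeting started
        elif recording and count < end_threshold:
            recording = False  # second judgment unit: meeting ended
        decisions.append(recording)
    return decisions
```

In a deployment, `frames` would come from the camera stream and `detect` would wrap the trained model plus the person-filtering step.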
This embodiment is only an explanation of the invention and is not to be construed as limiting it. Modifications of this embodiment that involve no creative contribution may be made as required by those skilled in the art after reading this specification, but all such modifications are protected by patent law within the scope of the claims of the invention.

Claims (8)

1. A deep-neural-network-based meeting participant identification and recording system, characterized in that: the system comprises a camera module, an edge computing device and a storage device, the camera module and the edge computing device both being installed in the meeting area; the edge computing device comprises a power supply module, an AI core computing module and a network transmission module, and is connected to the storage device through the network transmission module; the camera module captures video and pictures in the meeting area, the network transmission module transmits the videos and pictures captured by the camera module to the storage device, and the AI core computing module retrieves the videos and pictures from the storage device for processing, detection and identification;
the AI core computing module comprises a model application module, which in turn comprises a detection unit, a first judgment unit and a second judgment unit; an image processing algorithm reads the camera module's stream, extracts each frame and feeds it into the detection unit, which detects the number of people and their positions in the image;
the first judgment unit sets a start threshold: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the meeting is judged to have started and the camera module starts video recording;
the second judgment unit sets an end threshold: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the meeting is judged to have ended, and the camera module stops recording and uploads the video to the storage device for storage.
2. The deep-neural-network-based participant identification and recording system as claimed in claim 1, wherein: the detection unit is connected to a model training module comprising a data set production unit and a model training unit; the data set production unit retrieves videos and images from the storage device and converts them into a data set, and the model training unit trains on the data set and outputs a model to the detection unit.
3. The deep-neural-network-based participant identification and recording system as claimed in claim 2, wherein: given a single input picture, the detection unit obtains all segmentation instances that match the characteristics of meeting participants.
4. The deep-neural-network-based participant identification and recording system as claimed in claim 3, wherein: the model training unit uses a model pre-trained on dedicated data, or a derivative thereof, including Faster R-CNN, SSD and YOLO.
5. The deep-neural-network-based participant identification and recording system as claimed in claim 2, wherein: the data set production unit preprocesses and enhances the input pictures by computer algorithm, the preprocessing and enhancement including color space conversion, image resizing and image projection transformation.
6. The deep-neural-network-based participant identification and recording system as claimed in claim 5, wherein: the camera module is any one of a monitoring camera, a wide-angle camera and an infrared camera; the network transmission module is a wired or wireless network module; and the storage device is any one of a hard disk, a memory card, a private cloud server and a public cloud server.
7. The identification and recording method of the deep-neural-network-based participant identification and recording system according to any one of claims 1 to 6, characterized by comprising the following steps:
S1: the data set production unit retrieves the videos and photos uploaded to the storage device by the camera module and preprocesses them to form a data set;
S2: the hyperparameters of the model training unit's deep neural network model are adjusted iteratively; once the set recognition accuracy and recall are reached, tuning stops and the adjusted model is deployed to the detection unit;
S3: a computer vision algorithm reads the camera module in real time, extracts each frame and feeds it into the tested model in the detection unit, which identifies the number and positions of people and outputs them to the first judgment unit;
S4: when the number of people in the image exceeds the start threshold and the distribution of people matches the meeting characteristics, the first judgment unit judges that the meeting has started and the camera module starts video recording;
S5: when the number of people in the image falls below the end threshold and the distribution of people no longer matches the meeting characteristics, the second judgment unit judges that the meeting has ended, and the camera module stops recording and saves the video to the storage device.
8. The identification recording method as claimed in claim 7, wherein S1 comprises the steps of:
A1: retrieve the videos and photos shot by the camera module from the storage device, extract frames from the videos, and screen out the pictures containing persons;
A2: label the pictures with an image annotation tool;
A3: resize the pictures with an image processing algorithm;
A4: perform enhancement operations on the data set, applying geometric transformation, random cropping, standardization and normalization, and brightness and contrast adjustment to the pictures, then shuffle the data set;
A5: divide the data set into a training set and a test set in a ratio of m:n, the ratio ranging from 8:2 to 99:1 depending on the data volume;
A6: convert the data set into the data format required for model training.
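The shuffle-and-split of A4/A5 can be sketched as follows. The function name, parameters, and fixed seed are hypothetical; the claim specifies only an m:n split chosen by data volume (8:2 for smaller sets, up to 99:1 for very large ones).

```python
# Illustrative sketch of A4 (shuffle) and A5 (m:n train/test split).
import random

def split_dataset(samples, m=8, n=2, seed=42):
    """Shuffle `samples` and divide them into train/test sets in ratio m:n."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)   # A4: scrambling operation
    cut = len(shuffled) * m // (m + n)      # A5: m:n split point
    return shuffled[:cut], shuffled[cut:]

# 8:2 split of 100 samples -> 80 train, 20 test
train, test = split_dataset(range(100))
```

A fixed seed keeps the partition reproducible across runs, so the test set seen during hyperparameter tuning (S2) stays disjoint from the training set.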
CN202110243416.6A 2021-03-05 2021-03-05 Deep neural network-based consultation personnel identification recording system and method Active CN112861775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110243416.6A CN112861775B (en) 2021-03-05 2021-03-05 Deep neural network-based consultation personnel identification recording system and method

Publications (2)

Publication Number Publication Date
CN112861775A CN112861775A (en) 2021-05-28
CN112861775B true CN112861775B (en) 2023-10-24

Family

ID=75993747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110243416.6A Active CN112861775B (en) 2021-03-05 2021-03-05 Deep neural network-based consultation personnel identification recording system and method

Country Status (1)

Country Link
CN (1) CN112861775B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104954728A (en) * 2014-03-27 2015-09-30 天津科企融生产力促进有限公司 Control system for investment financing conference
CN111047184A (en) * 2019-12-10 2020-04-21 中国银行股份有限公司 Method and device for updating use time of conference room
CN111444389A (en) * 2020-03-27 2020-07-24 焦点科技股份有限公司 Conference video analysis method and system based on target detection


Also Published As

Publication number Publication date
CN112861775A (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN106295567B (en) A kind of localization method and terminal of key point
CN105551104B (en) A kind of automatic attendance checking system in middle and primary schools classroom differentiated based on monitoring image seat
Thomaz et al. Feasibility of identifying eating moments from first-person images leveraging human computation
KR20200098875A (en) System and method for providing 3D face recognition
CN108229376B (en) Method and device for detecting blinking
CN108549854A (en) A kind of human face in-vivo detection method
CN113411542A (en) Intelligent working condition monitoring equipment
WO2019000777A1 (en) Internet-based face beautification system
US11521473B2 (en) Audio/video electronic device
CN104167025A (en) Mobile attendance system based on dynamic face recognition and WIFI positioning and method of attendance system
JP2021531571A (en) Certificate image extraction method and terminal equipment
CN108898079A (en) A kind of monitoring method and device, storage medium, camera terminal
CN109446932A (en) A kind of dressing inspection method, device, terminal device and computer storage medium
CN105022999A (en) Man code company real-time acquisition system
WO2019056503A1 (en) Store monitoring evaluation method, device and storage medium
CN110543811A (en) non-cooperation type examination person management method and system based on deep learning
CN105574500A (en) Method and device for improving face recognition passing rate
CN104361357A (en) Photo set classification system and method based on picture content analysis
Zhou Application research of face recognition technology in smart campus
CN111369170B (en) Bank fine text service evaluation method
CN110310384A (en) A kind of noninductive intelligent Checking on Work Attendance method and its system
CN102063659A (en) Method, server and system for collecting and making electronic photo
CN108229124A (en) A kind of self-service query printing testimony of a witness compares device systems and its method of work
CN112861775B (en) Deep neural network-based consultation personnel identification recording system and method
CN111385530A (en) Intelligent camera combined encryption method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant