CN105389549B - Object identifying method and device based on human action feature - Google Patents


Info

Publication number
CN105389549B
CN105389549B (application number CN201510713010.4A)
Authority
CN
China
Prior art keywords
human body
human
action feature
key point
point information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510713010.4A
Other languages
Chinese (zh)
Other versions
CN105389549A (en)
Inventor
曹科垒
张弛
印奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Beijing Maigewei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd, Beijing Maigewei Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201510713010.4A priority Critical patent/CN105389549B/en
Publication of CN105389549A publication Critical patent/CN105389549A/en
Application granted granted Critical
Publication of CN105389549B publication Critical patent/CN105389549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30232: Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an object recognition method and device based on human action features. The object recognition method includes: receiving multiple frames of human body images of a person to be identified collected by an acquisition device, and cropping a human body region from at least one frame of the multiple frames of human body images; extracting key point information of the human body from the cropped human body region; and obtaining a person recognition result, using a trained model, from the extracted key point information. The object recognition method and device based on human action features provided by the present invention perform person recognition based on human action features, which can compensate for the shortcomings of relying solely on face recognition technology and improve the accuracy of person recognition.

Description

Object recognition method and device based on human action features
Technical field
The present invention relates to the technical field of image processing, and in particular to an object recognition method and device based on human action features.
Background art
Today, applications of face recognition technology are becoming widespread, for example in access control systems and monitoring systems. Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. The face is innate, and its uniqueness and its resistance to replication provide the necessary premise for identity recognition.
However, face recognition technology has defects that are difficult to overcome. For example, when the person to be identified wears makeup that changes the face substantially, the performance of face recognition declines sharply. As another example, when the face cannot be captured at all, face recognition simply cannot work. Therefore, a technique is needed that can compensate for the shortcomings of performing person recognition with face recognition technology alone.
Summary of the invention
In view of the deficiencies of the prior art, in one aspect, the present invention provides an object recognition method based on human action features. The object recognition method includes: receiving multiple frames of human body images of a person to be identified collected by an acquisition device, and cropping a human body region from at least one frame of the multiple frames of human body images; extracting key point information of the human body from the cropped human body region; and obtaining a person recognition result, using a trained model, from the extracted key point information.
In one embodiment of the invention, obtaining a person recognition result from the key point information using a trained model includes: obtaining human action features from the key point information using the trained model, and obtaining the person recognition result from the human action features.
In one embodiment of the invention, the key point information includes human skeleton information, and obtaining human action features from the key point information using a trained model and obtaining a person recognition result from the human action features includes: integrating the human skeleton information within a predetermined time into a tensor and inputting it into a convolutional neural network for processing; inputting the output of the convolutional neural network into a recurrent neural network for processing, to obtain the human action features; and inputting the human action features into a multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
Illustratively, the step of extracting the key point information of the human body from the cropped human body region is implemented using an object feature extraction technique; and/or the step of cropping the human body region from the multiple frames of human body images is implemented using an object recognition technique.
Illustratively, the acquisition device is a depth camera.
In another aspect, the present invention also provides an object recognition device based on human action features. The object recognition device includes: an image processing module for receiving the multiple frames of human body images of a person to be identified collected by an acquisition device, and for cropping a human body region from at least one frame of the multiple frames of human body images; a skeleton refinement module for extracting key point information of the human body from the human body region cropped by the image processing module; and a person recognition module for obtaining a person recognition result, using a trained model, from the key point information extracted by the skeleton refinement module.
In one embodiment of the invention, the person recognition module is further configured to obtain human action features from the key point information using a trained model, and to obtain the person recognition result from the human action features.
In one embodiment of the invention, the key point information includes human skeleton information, and the person recognition module is further configured to: integrate the human skeleton information within a predetermined time into a tensor and input it into a convolutional neural network for processing; input the output of the convolutional neural network into a recurrent neural network for processing, to obtain human action features; and input the human action features into a multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
Illustratively, the skeleton refinement module extracts the key point information of the human body from the human body region using an object feature extraction technique; and/or the image processing module crops the human body region from the multiple frames of human body images using an object recognition technique.
Illustratively, the acquisition device is a depth camera.
The object recognition method and device based on human action features provided by the present invention perform person recognition based on human action features, which can compensate for the shortcomings of relying solely on face recognition technology and improve the accuracy of person recognition.
Brief description of the drawings
The following drawings of the present invention are incorporated herein as a part of the present invention for the purpose of understanding the invention. The drawings illustrate embodiments of the present invention and their description, and serve to explain the principles of the invention.
In the drawings:
Fig. 1 shows a flowchart of an object recognition method based on human action features according to an embodiment of the present invention; and
Fig. 2 shows a structural block diagram of an object recognition device based on human action features according to an embodiment of the present invention.
Detailed description of the embodiments
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without one or more of these details. In other instances, some technical features that are well known in the art are not described, in order to avoid obscuring the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as being limited to the embodiments set forth herein. On the contrary, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "comprising" and/or "including", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, detailed steps and structures are set forth in the following description to explain the technical solution of the invention. Preferred embodiments of the present invention are described in detail below; however, in addition to these detailed descriptions, the present invention may also have other embodiments.
An embodiment of the present invention provides an object recognition method based on human action features. The object recognition method performs person recognition, using a trained model, from key point information of the human body. Different people differ in body structure and also in motion habits, so object recognition can be carried out from a person's ordinary actions, thereby compensating for the shortcomings of using face recognition technology alone.
In the following, the object recognition method based on human action features according to an embodiment of the present invention is described in detail with reference to Fig. 1, which shows a flowchart of an object recognition method 100 based on human action features according to an embodiment of the present invention. As shown in Fig. 1, the object recognition method 100 includes the following steps:
Step 101: receive multiple frames of human body images of the person to be identified collected by an acquisition device, and crop a human body region from at least one frame of the multiple frames of human body images.
The acquisition device may be a camera; preferably, the camera may be a depth camera, which locates the key points of the human body more accurately and thus helps improve accuracy. The camera shoots video in real time to obtain the multiple frames of human body images of the person to be identified. Then, at least one frame of the collected human body images is processed, for example using an object recognition technique, to crop out the human body region. In a preferred embodiment of the present invention, each frame of the multiple frames of human body images may be processed; in this way, more complete data of the human body region can be cropped out.
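Purely as an illustration (and not as the claimed implementation), the frame processing of step 101 can be sketched in Python; the detector passed in as `detect_person` is a hypothetical stand-in for whichever object recognition technique is used to locate the person:

```python
def crop_region(frame, box):
    """Crop a rectangular human body region out of one frame.

    frame: 2-D list of pixel rows; box: (top, left, height, width),
    e.g. as produced by a person detector.
    """
    top, left, height, width = box
    return [row[left:left + width] for row in frame[top:top + height]]


def collect_regions(frames, detect_person):
    """Run a (hypothetical) person detector on every frame and crop the
    detected body region, mirroring the preferred embodiment in which
    each of the multiple frames is processed."""
    regions = []
    for frame in frames:
        box = detect_person(frame)
        if box is not None:  # skip frames in which no person is found
            regions.append(crop_region(frame, box))
    return regions


# Toy usage: one 4x4 "frame" and a fixed detector returning a 2x2 box.
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
regions = collect_regions([frame], lambda f: (1, 1, 2, 2))
print(regions[0])  # [[5, 6], [9, 10]]
```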
Step 102: extract key point information of the human body from the cropped human body region.
Illustratively, the key point information of the human body may be extracted from the human body region using an object feature extraction technique. The extracted key point information may be human skeleton information.
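For illustration, assuming a hypothetical skeleton made of named 2-D joints (the embodiment does not fix any particular joint layout), the key point information of one frame can be flattened into a vector; its length gives the V dimension of the tensor used in step 103:

```python
# Hypothetical joint set; the patent does not specify a skeleton layout.
JOINTS = ["head", "neck", "l_shoulder", "r_shoulder",
          "l_elbow", "r_elbow", "l_hip", "r_hip"]


def skeleton_vector(keypoints):
    """Flatten per-joint (x, y) coordinates into one frame's skeleton
    vector; a sequence of T such vectors forms the (T, V) tensor."""
    vec = []
    for name in JOINTS:
        x, y = keypoints[name]
        vec.extend([x, y])
    return vec


# Toy frame in which joint i sits at (i, 2 * i).
frame_kp = {name: (float(i), float(2 * i)) for i, name in enumerate(JOINTS)}
v = skeleton_vector(frame_kp)
print(len(v))  # V = 2 coordinates * 8 joints = 16
```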
Step 103: obtain a person recognition result, using a trained model, from the extracted key point information.
In this step, the trained model can integrate the key point information extracted in step 102, for example the human skeleton information, into continuous skeleton spatial information, which can be regarded as the motion trajectory of the skeleton over a continuous period of time, i.e., the human action features; the person recognition result is then obtained from the human action features. Illustratively, the model involved in this step can be obtained by training on a large number of action images or videos of people, and it may include a convolutional neural network, a recurrent neural network, and a multilayer perceptron. On this basis, step 103 may include: integrating the human skeleton information within a predetermined time into a tensor and inputting it into the convolutional neural network for processing; inputting the output of the convolutional neural network into the recurrent neural network for processing, to obtain the human action features; and inputting the output of the recurrent neural network, i.e., the human action features, into the multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
Step 103 is further illustrated below with a specific example. Illustratively, in one embodiment of the present invention, the human skeleton information within a predetermined time (for example, the most recent 30 seconds) can be integrated into a tensor of shape (T, V), where T denotes the number of frames of human body images acquired within the predetermined time and V denotes the size of the human skeleton information of each frame (i.e., the data length of the human skeleton information). The tensor (T, V) is input into the convolutional neural network and the convolution kernel size is set; for example, the kernel size may be set to (5, 1), so that the human skeleton information of every 5 frames is transformed at once and mapped to 32 feature maps, making the output of the convolutional neural network a tensor of shape (T/5, V*32). This tensor (T/5, V*32) is then input into the recurrent neural network, which is composed of recurrent units running in two opposite directions and learns the patterns contained in the human skeleton information, yielding the human action features. Finally, the multilayer perceptron maps the human action features to a person identity label, obtaining the person recognition result and thereby realizing object recognition based on human action features.
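The shape bookkeeping of this example can be checked with a toy, untrained stand-in for the convolutional stage (random weights and plain Python lists; the 5 frames-per-second rate below is an assumption made only so that T comes out to a round number):

```python
import random


def temporal_conv(x, out_maps=32, kernel_t=5):
    """Toy stand-in for the convolutional stage: slide a (kernel_t, 1)
    kernel over the time axis of a (T, V) tensor with stride kernel_t,
    producing out_maps values per input column, i.e. an output of
    shape (T // kernel_t, V * out_maps)."""
    T, V = len(x), len(x[0])
    # One random kernel per output map (untrained; shape checking only).
    kernels = [[random.random() for _ in range(kernel_t)]
               for _ in range(out_maps)]
    out = []
    for t0 in range(0, T - kernel_t + 1, kernel_t):
        row = []
        for v in range(V):            # each skeleton coordinate separately
            window = [x[t0 + dt][v] for dt in range(kernel_t)]
            for k in kernels:         # map the 5-frame window to 32 values
                row.append(sum(w * s for w, s in zip(k, window)))
        out.append(row)
    return out


T, V = 150, 16                        # e.g. 30 s of skeletons at 5 frames/s
x = [[0.0] * V for _ in range(T)]
y = temporal_conv(x)
print(len(y), len(y[0]))              # (T/5, V*32) = (30, 512)
```

With T = 150 and V = 16, the output has T/5 = 30 rows of V*32 = 512 values, matching the (T/5, V*32) shape stated in the example.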
The object recognition method based on human action features according to the above embodiment of the present invention performs person recognition based on human action features, which can compensate for the shortcomings of relying solely on face recognition technology and improve the accuracy of person recognition.
Another embodiment of the present invention provides an object recognition device based on human action features. Fig. 2 shows a structural block diagram of an object recognition device 200 based on human action features according to an embodiment of the present invention. As shown in Fig. 2, the object recognition device 200 includes an image processing module 201, a skeleton refinement module 202, and a person recognition module 203.
The image processing module 201 is connected to an external acquisition device and to the skeleton refinement module 202; it receives the multiple frames of human body images of the person to be identified collected by the acquisition device, crops a human body region from at least one frame of the multiple frames, and outputs the received frames and the cropped human body region to the skeleton refinement module 202. The skeleton refinement module 202 is connected to the image processing module 201 and the person recognition module 203; it extracts key point information of the human body from the human body region cropped by the image processing module 201 and outputs the extracted key point information to the person recognition module 203. The person recognition module 203 is connected to the skeleton refinement module 202; it obtains a person recognition result, using a trained model, from the key point information extracted by the skeleton refinement module 202 and outputs the recognition result.
Optionally, the above acquisition device may be included in the object recognition device 200. Optionally, the object recognition device 200 may also include an output device (not shown in Fig. 2), which can be connected to the person recognition module 203 and outputs the recognition result produced by the person recognition module 203; alternatively, the output device may be located outside the object recognition device 200. Illustratively, the output device may be a display that shows the recognition result output by the person recognition module 203. In another example, the output device may be a loudspeaker that plays the recognition result output by the person recognition module 203.
In one embodiment of the invention, the person recognition module 203 is further configured to obtain human action features from the key point information using a trained model, and to obtain the person recognition result from the human action features.
In one embodiment of the invention, the key point information extracted by the skeleton refinement module 202 may include human skeleton information, and the person recognition module 203 may further be configured to: integrate the human skeleton information within a predetermined time into a tensor and input it into a convolutional neural network for processing; input the output of the convolutional neural network into a recurrent neural network for processing, to obtain human action features; and input the human action features into a multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
Illustratively, the skeleton refinement module 202 may extract the key point information of the human body from the human body region using an object feature extraction technique. Illustratively, the image processing module 201 may crop the human body region from at least one frame of the multiple frames of human body images using an object recognition technique. Illustratively, the acquisition device may be a depth camera.
The detailed operation of each of the above modules can be understood with reference to the embodiment described for Fig. 1 and is not repeated here.
The modules of the embodiments of the present invention may be implemented in hardware, as software modules running on one or more processors, or as a combination thereof. Those skilled in the art will understand that, in practice, a microprocessor or a digital signal processor (DSP) may be used to realize some or all of the functions of some or all of the components of the object recognition device based on human action features according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program or a computer program product) for executing some or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier medium, or provided in any other form.
The present invention has been explained by means of the above embodiments, but it should be understood that the above embodiments are given only for the purposes of illustration and explanation and are not intended to limit the invention to the scope of the described embodiments. Furthermore, those skilled in the art will understand that the present invention is not limited to the above embodiments and that, according to the teachings of the present invention, many further variants and modifications can be made, all of which fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (8)

1. An object recognition method based on human action features, characterized in that the object recognition method includes:
receiving multiple frames of human body images of a person to be identified collected by an acquisition device, and cropping a human body region from at least one frame of the multiple frames of human body images;
extracting key point information of the human body from the cropped human body region, the key point information including human skeleton information; and
obtaining human action features from the key point information using a trained model, and obtaining a person recognition result from the human action features;
wherein obtaining human action features from the key point information using the trained model includes: integrating the human skeleton information within a predetermined time into a tensor and inputting it into a convolutional neural network for processing; and inputting the output of the convolutional neural network into a recurrent neural network for processing, to obtain the human action features.
2. The object recognition method of claim 1, characterized in that obtaining a person recognition result from the human action features includes:
inputting the human action features into a multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
3. The object recognition method of any one of claims 1 to 2, characterized in that:
the step of extracting the key point information of the human body from the cropped human body region is implemented using an object feature extraction technique; and/or
the step of cropping the human body region from at least one frame of the multiple frames of human body images is implemented using an object recognition technique.
4. The object recognition method of any one of claims 1 to 2, characterized in that the acquisition device is a depth camera.
5. An object recognition device based on human action features, characterized in that the object recognition device includes:
an image processing module for receiving multiple frames of human body images of a person to be identified collected by an acquisition device, and for cropping a human body region from at least one frame of the multiple frames of human body images;
a skeleton refinement module for extracting key point information of the human body from the human body region cropped by the image processing module, the key point information including human skeleton information; and
a person recognition module for obtaining human action features from the key point information using a trained model, and for obtaining a person recognition result from the human action features;
wherein obtaining human action features from the key point information using the trained model includes: integrating the human skeleton information within a predetermined time into a tensor and inputting it into a convolutional neural network for processing; and inputting the output of the convolutional neural network into a recurrent neural network for processing, to obtain the human action features.
6. The object recognition device of claim 5, characterized in that the person recognition module is further configured to:
input the human action features into a multilayer perceptron, which maps them to a person identity label, to obtain the person recognition result.
7. The object recognition device of any one of claims 5 to 6, characterized in that:
the skeleton refinement module extracts the key point information of the human body from the human body region using an object feature extraction technique; and/or
the image processing module crops the human body region from the multiple frames of human body images using an object recognition technique.
8. The object recognition device of any one of claims 5 to 6, characterized in that the acquisition device is a depth camera.
CN201510713010.4A 2015-10-28 2015-10-28 Object identifying method and device based on human action feature Active CN105389549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510713010.4A CN105389549B (en) 2015-10-28 2015-10-28 Object identifying method and device based on human action feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510713010.4A CN105389549B (en) 2015-10-28 2015-10-28 Object identifying method and device based on human action feature

Publications (2)

Publication Number Publication Date
CN105389549A CN105389549A (en) 2016-03-09
CN105389549B true CN105389549B (en) 2019-08-13

Family

ID=55421821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510713010.4A Active CN105389549B (en) 2015-10-28 2015-10-28 Object identifying method and device based on human action feature

Country Status (1)

Country Link
CN (1) CN105389549B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105893937A (en) * 2016-03-28 2016-08-24 联想(北京)有限公司 Image identification method and apparatus
CN107273782B (en) * 2016-04-08 2022-12-16 微软技术许可有限责任公司 Online motion detection using recurrent neural networks
CN107644190A (en) * 2016-07-20 2018-01-30 北京旷视科技有限公司 Pedestrian's monitoring method and device
CN108171244A (en) * 2016-12-07 2018-06-15 北京深鉴科技有限公司 Object identifying method and system
CN106682620A (en) * 2016-12-28 2017-05-17 北京旷视科技有限公司 Human face image acquisition method and device
CN107423721A (en) * 2017-08-08 2017-12-01 珠海习悦信息技术有限公司 Interactive action detection method, device, storage medium and processor
CN109670520B (en) * 2017-10-13 2021-04-09 杭州海康威视数字技术股份有限公司 Target posture recognition method and device and electronic equipment
CN108875501B (en) * 2017-11-06 2021-10-15 北京旷视科技有限公司 Human body attribute identification method, device, system and storage medium
CN107832708A (en) * 2017-11-09 2018-03-23 云丁网络技术(北京)有限公司 A kind of human motion recognition method and device
CN107832799A (en) * 2017-11-20 2018-03-23 北京奇虎科技有限公司 Object identifying method and device, computing device based on camera scene
CN109889773A (en) * 2017-12-06 2019-06-14 中国移动通信集团四川有限公司 Method, apparatus, equipment and the medium of the monitoring of assessment of bids room personnel
US10373332B2 (en) 2017-12-08 2019-08-06 Nvidia Corporation Systems and methods for dynamic facial analysis using a recurrent neural network
CN108334863B (en) 2018-03-09 2020-09-04 百度在线网络技术(北京)有限公司 Identity authentication method, system, terminal and computer readable storage medium
CN109165552B (en) * 2018-07-14 2021-02-26 深圳神目信息技术有限公司 Gesture recognition method and system based on human body key points and memory
CN109101901B (en) * 2018-07-23 2020-10-27 北京旷视科技有限公司 Human body action recognition method and device, neural network generation method and device and electronic equipment
CN109359543B (en) * 2018-09-19 2021-10-01 武汉烽火众智数字技术有限责任公司 Portrait retrieval method and device based on skeletonization
CN109740446A (en) * 2018-12-14 2019-05-10 深圳壹账通智能科技有限公司 Classroom students ' behavior analysis method and device
CN111368594B (en) * 2018-12-26 2023-07-18 中国电信股份有限公司 Method and device for detecting key points
CN110532874B (en) * 2019-07-23 2022-11-11 深圳大学 Object attribute recognition model generation method, storage medium and electronic device
CN111259751B (en) * 2020-01-10 2023-08-29 北京百度网讯科技有限公司 Human behavior recognition method, device, equipment and storage medium based on video
CN111833397B (en) * 2020-06-08 2022-11-29 西安电子科技大学 Data conversion method and device for orientation-finding target positioning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1426020A (en) * 2001-12-13 2003-06-25 中国科学院自动化研究所 Far distance identity identifying method based on walk
CN101661554A (en) * 2009-09-29 2010-03-03 哈尔滨工程大学 Front face human body automatic identity recognition method under long-distance video
CN103729614A (en) * 2012-10-16 2014-04-16 上海唐里信息技术有限公司 People recognition method and device based on video images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9400925B2 (en) * 2013-11-15 2016-07-26 Facebook, Inc. Pose-aligned networks for deep attribute modeling


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deep Convolutional Neural Networks; Tomas Pfister; Asian Conference on Computer Vision 2014; 2015-04-16; pp. 538-552

Also Published As

Publication number Publication date
CN105389549A (en) 2016-03-09

Similar Documents

Publication Publication Date Title
CN105389549B (en) Object identifying method and device based on human action feature
Jalal et al. A wrist worn acceleration based human motion analysis and classification for ambient smart home system
CN105160318B (en) Lie detecting method based on facial expression and system
CN106454481B (en) A kind of method and device of live broadcast of mobile terminal interaction
CN105902257B (en) Sleep state analysis method and device, intelligent wearable device
CN104795067B (en) Voice interactive method and device
CN110941990A (en) Method and device for evaluating human body actions based on skeleton key points
CN110427859A (en) A kind of method for detecting human face, device, electronic equipment and storage medium
CN108596041B (en) A kind of human face in-vivo detection method based on video
CN108229376B (en) Method and device for detecting blinking
CN109815776B (en) Action prompting method and device, storage medium and electronic device
EP3531342A3 (en) Method, apparatus and system for human body tracking processing
CN106272446B (en) The method and apparatus of robot motion simulation
CN109558892A (en) A kind of target identification method neural network based and system
CN109740019A (en) A kind of method, apparatus to label to short-sighted frequency and electronic equipment
CN109978975A (en) A kind of moving method and device, computer equipment of movement
CN102567716A (en) Face synthetic system and implementation method
CN113365147A (en) Video editing method, device, equipment and storage medium based on music card point
CN110188610A (en) A kind of emotional intensity estimation method and system based on deep learning
CN109583364A (en) Image-recognizing method and equipment
CN109963114A (en) One kind is had dinner detection device, method, server and system
CN110210449A (en) A kind of face identification system and method for virtual reality friend-making
CN112668492A (en) Behavior identification method for self-supervised learning and skeletal information
CN108614987A (en) The method, apparatus and robot of data processing
CN107168536A (en) Examination question searching method, examination question searcher and electric terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant after: MEGVII INC.

Applicant after: Beijing maigewei Technology Co., Ltd.

Address before: 100190 Beijing, Haidian District Academy of Sciences, South Road, No. 2, block A, No. 313

Applicant before: MEGVII INC.

Applicant before: Beijing aperture Science and Technology Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant