CN113822145A - Face recognition operation method based on deep learning - Google Patents

Face recognition operation method based on deep learning

Info

Publication number
CN113822145A
CN113822145A (application CN202110876163.6A)
Authority
CN
China
Prior art keywords
vehicle
face
deep learning
owner
operation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110876163.6A
Other languages
Chinese (zh)
Inventor
韩智伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilu Technology Co Ltd filed Critical Dilu Technology Co Ltd
Priority to CN202110876163.6A
Publication of CN113822145A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/047 - Probabilistic or stochastic networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition operation method based on deep learning, which comprises: collecting image data containing faces; training a deep neural network model on the labeled data; starting a camera and transmitting the captured image to a computing unit on which the trained face detection network model is deployed, so as to judge whether the person entering the vehicle is the vehicle owner; if the person entering the vehicle is identified as the owner, the vehicle is started automatically; if the person is identified as someone other than the owner, the vehicle is not started automatically and must be started manually with a key or other equipment. The invention has the beneficial effects that: when the owner sits in the vehicle, the in-vehicle camera recognizes the owner and starts the vehicle automatically, making vehicle start-up more intelligent and automated and removing the inconvenience of carrying a car key.

Description

Face recognition operation method based on deep learning
Technical Field
The invention relates to the technical field of deep learning and artificial intelligence, in particular to a face recognition operation method based on deep learning.
Background
Research on face recognition systems began in the 1960s. It advanced after the 1980s with progress in computer technology and optical imaging, and only entered the stage of practical application in the late 1990s, led mainly by technical implementations in the United States, Germany and Japan. The key to the success of a face recognition system is whether it possesses a leading core algorithm and whether its recognition accuracy and recognition speed are practical. A face recognition system integrates many specialized technologies, including artificial intelligence, machine recognition, machine learning, model theory, expert systems and video image processing, combined with the theory and implementation of intermediate-value processing. It is the latest application of biometric recognition, and the implementation of its core technology demonstrates the transition from weak artificial intelligence to strong artificial intelligence.
One solution that has developed rapidly is multi-light-source face recognition based on active near-infrared images. It overcomes the influence of lighting changes and has achieved excellent recognition performance, with overall system performance in accuracy, stability and speed exceeding that of three-dimensional face recognition. The technology has matured rapidly over the past two to three years, and face recognition has gradually become practical.
Disclosure of Invention
This section is intended to summarize some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the application, simplifications or omissions may be made to avoid obscuring their purpose; such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
Therefore, the technical problem solved by the invention is to provide a face recognition operation method based on deep learning.
In order to solve the above technical problem, the invention provides the following technical scheme: a face recognition operation method based on deep learning, comprising: collecting image data containing faces; labeling the face picture data, including framing the pixel position of the face in each picture with a rectangular box; training a deep neural network model on the labeled data; starting a camera and transmitting the captured image to a computing unit on which the trained face detection network model is deployed, the computing unit capturing the face in the image and extracting its facial features; judging whether the person entering the vehicle is the vehicle owner by computing the similarity between the extracted facial features and the stored owner features; if the person entering the vehicle is identified as the owner, the vehicle is started automatically; if the person is identified as someone other than the owner, the vehicle is not started automatically and must be started manually with a key or other equipment.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: data collection and labeling use, but are not limited to, a 1920×1080 high-definition monocular camera; mounted at the same angle as the vehicle body camera, the monocular camera continuously captures picture data of the faces to be collected under various indoor and outdoor lighting conditions, both day and night.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: the collected data are labeled by framing each face with a 2D rectangular box, locating and annotating the face, and recording the pixel coordinates of the face in the image.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: training the model on the labeled data comprises building a deep neural network for face detection with a deep learning framework, where the input of the network is an image and the output is the predicted pixel-coordinate information of the face in the input image together with the confidence of the corresponding points.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: softmax cross-entropy is used as the loss function, defined as follows.
Softmax function:
S_j = \frac{e^{a_j}}{\sum_{k=1}^{T} e^{a_k}}
Cross-entropy function:
L = -\sum_{j=1}^{T} y_j \log S_j
where L is the loss, a_j is the j-th raw output of the network, and S_j is the j-th value of the softmax output vector S, indicating the probability that the sample belongs to the j-th class; the index j of the summation runs from 1 to the total number of classes T, so the label y is a 1×T vector whose entries are all 0 except for a single 1 at the position of the true class.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: the labeled data are read with the deep learning framework and the built face detection deep neural network is trained; the error between the model prediction and the ground truth is computed with the loss function; the parameters of the deep neural network model are updated with a gradient optimizer according to this error; training continues until the mean average precision of the training metric reaches 98% or more.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: the trained deep neural network model is deployed into engineering code and the pictures captured by the camera are predicted frame by frame, which comprises calling the trained deep neural network model from C++ code, reading real-time camera frames, processing the read frames with the deep neural network model in real time, and writing the processed result of each frame into an array on which the decision logic operates.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: the obtained result array is compared with the facial features extracted from the pre-stored photo of the vehicle owner; the similarity is computed by the Euclidean distance method; if the similarity reaches the set threshold, the identified person is regarded as the vehicle owner and the judgment result is 1; otherwise, the identified person is regarded as a non-owner and the judgment result is -1.
As a preferred scheme of the deep learning based face recognition operation method of the present invention, wherein: if the judgment result is 1, the owner-entry condition is considered satisfied and the camera unit sends a start instruction to the vehicle start system; otherwise, if the judgment result is -1, the automatic start condition is not satisfied and the camera unit sends no instruction to the vehicle start unit.
The invention has the beneficial effects that: with the computer-vision-based face recognition start system, when the owner sits in the vehicle the in-vehicle camera recognizes the owner and starts the vehicle automatically, making vehicle start-up more intelligent and automated and removing the inconvenience of carrying a car key.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort. Wherein:
fig. 1 is a schematic basic flow chart of a face recognition operation method based on deep learning according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an experimental result of a face recognition operation method based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to the schematic diagram of fig. 1, the present embodiment provides a face recognition operation method based on deep learning. Vehicles currently on the market are started manually by inserting a key or by a proximity key; such systems have the problems that the key battery must be replaced periodically, the key is easily lost, and replacement is expensive.
Specifically, the method comprises the following steps:
collecting image data containing faces;
labeling the face picture data, including framing the pixel position of the face in each picture with a rectangular box;
training a deep neural network model on the labeled data;
starting a camera and transmitting the captured image to a computing unit on which the trained face detection network model is deployed, the computing unit capturing the face in the image and extracting its facial features;
judging whether the person entering the vehicle is the vehicle owner by computing the similarity between the extracted facial features and the stored owner features;
if the person entering the vehicle is identified as the owner, the vehicle is started automatically;
if the person is identified as someone other than the owner, the vehicle is not started automatically and must be started manually with a key or other equipment.
Collecting and labeling the data includes the following.
A 1920×1080 high-definition monocular camera (though not limited to this model) is mounted at the same angle as the vehicle body camera and continuously captures picture data of the faces to be collected under various indoor and outdoor lighting conditions, both day and night.
Labeling the collected data may include,
framing the face with a 2D box, locating and annotating the face, and recording the pixel coordinates of the face in the image.
Training on the labeled data includes the following.
A deep neural network for face detection is built with the MXNET deep learning framework; the input of the network is an image and the output is the predicted pixel-coordinate information of the face in the input image together with the confidence of the corresponding points.
Softmax cross-entropy is used as the loss function, defined as follows.
Softmax function:
S_j = \frac{e^{a_j}}{\sum_{k=1}^{T} e^{a_k}}
Cross-entropy function:
L = -\sum_{j=1}^{T} y_j \log S_j
where L is the loss, a_j is the j-th raw output of the network, and S_j is the j-th value of the softmax output vector S, indicating the probability that the sample belongs to the j-th class; the index j of the summation runs from 1 to the total number of classes T, so the label y is a 1×T vector whose entries are all 0 except for a single 1 at the position of the true class.
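A minimal NumPy sketch of this loss, assuming raw network outputs a_j (logits) and a one-hot label vector y, is:

```python
import numpy as np

def softmax(logits):
    # Subtract the maximum for numerical stability before exponentiating.
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def cross_entropy(probs, y_onehot, eps=1e-12):
    # L = -sum_j y_j * log(S_j); only the true class contributes, since y is one-hot.
    return -np.sum(y_onehot * np.log(probs + eps))

logits = np.array([2.0, 0.5, -1.0])  # a_j for T = 3 classes
y = np.array([1.0, 0.0, 0.0])        # one-hot label: the true class is class 1
loss = cross_entropy(softmax(logits), y)
```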
Training comprises reading the labeled data with the MXNET deep learning framework,
training the built face detection deep neural network, and computing the error between the model prediction and the ground truth with the loss function;
updating the parameters of the deep neural network model with a Stochastic Gradient Descent (SGD) optimizer according to the size of this error;
until the training metric, mean average precision (mAP), reaches 98% or more.
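A hedged MXNet Gluon sketch of this loop is shown below; `train_loader` (the labeled-data iterator) and `compute_map` (the mean-average-precision evaluation) are assumed helpers rather than functions defined by the patent, and the learning rate is an arbitrary choice.

```python
from mxnet import autograd, gluon

def train_until_map(net, train_loader, compute_map, target_map=0.98, max_epochs=100):
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})
    for epoch in range(max_epochs):
        for images, labels in train_loader:
            with autograd.record():
                # Error between the model prediction and the ground truth.
                loss = loss_fn(net(images), labels)
            loss.backward()                    # gradients for the SGD update
            trainer.step(images.shape[0])      # update parameters, scaled by batch size
        if compute_map(net) >= target_map:     # stop once mAP reaches 98% or more
            return epoch
    return max_epochs
```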
The trained deep neural network model is deployed into engineering code and the pictures captured by the camera are predicted frame by frame, including,
calling the trained deep neural network model from C++ code, reading real-time camera frames, and processing the read frames with the deep neural network model in real time;
writing the processed result of each frame into an array on which the decision logic operates.
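The patent deploys the model from C++ engineering code; purely as an illustration of the same per-frame flow, a Python/OpenCV sketch is given below, where `net_forward` stands in for a call into the trained face detection model.

```python
import cv2

def run_camera_loop(net_forward, camera_index=0, max_frames=300):
    """Read live frames, run the face model on each, and collect per-frame results."""
    cap = cv2.VideoCapture(camera_index)
    results = []
    while len(results) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        results.append(net_forward(frame))  # per-frame face features / detections
    cap.release()
    return results  # array of per-frame results for the decision logic
```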
The obtained result array is taken out and compared with the facial features extracted from the pre-stored photo of the vehicle owner;
the similarity is computed by the Euclidean distance method, and the set threshold corresponds to a similarity of more than 99%; the similarity is computed as follows:
d = \sqrt{\sum_{n=1}^{m} (y1_n - y2_n)^2}
where d is the Euclidean distance, y1_n is a value of the previously stored face feature vector, y2_n is the corresponding value of the face feature vector acquired in real time, and m is the number of terms in the summation (the feature vector length); when d lies in the interval [0, 1], the similarity is regarded as 99% or more.
If so, the identified person is regarded as the vehicle owner and the judgment result is 1;
otherwise, the identified person is regarded as a non-owner and the judgment result is -1.
If the judgment result is 1, the owner-entry condition is considered satisfied and the camera unit sends a start instruction to the vehicle start system;
otherwise, if the judgment result is -1, the automatic start condition is not satisfied and the camera unit sends no instruction to the vehicle start unit.
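A minimal sketch of this comparison and decision step is given below; `send_start_command` stands in for the interface between the camera unit and the vehicle start system, which the patent does not specify, and the distance threshold of 1.0 follows the d in [0, 1] criterion above.

```python
import numpy as np

def owner_decision(owner_feat, live_feat, distance_threshold=1.0):
    """Return 1 if the live face matches the stored owner features, else -1."""
    d = np.sqrt(np.sum((np.asarray(owner_feat) - np.asarray(live_feat)) ** 2))
    return 1 if d <= distance_threshold else -1  # d in [0, 1] is treated as a match

def maybe_start_vehicle(owner_feat, live_feat, send_start_command):
    if owner_decision(owner_feat, live_feat) == 1:
        send_start_command()  # camera unit signals the vehicle start system
    # Result -1: no instruction is sent; the vehicle must be started manually with a key.
```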
With the high-definition monocular camera, clear face pictures can be continuously acquired under different lighting conditions, laying the groundwork for accurate face recognition later on. The facial features extracted by the deep neural network are compared with the pre-stored facial features of the vehicle owner, so that whether the person is the owner can be judged quickly and accurately, shortening the time needed to start the vehicle after the owner enters it and simplifying the start-up process.
Example 2
Referring to fig. 2, a second embodiment of the invention is shown. In order to verify and explain the technical effects of the method, this embodiment carries out a comparative test between conventional technical schemes and the method of the invention and compares the test results to verify the actual effect of the method.
In order to verify that the method has higher recognition accuracy, lower latency and lower cost than conventional methods, the start-up time and recognition accuracy of a given vehicle are measured with the conventional methods and with the method of the invention.
Test environment: the test vehicle runs on a simulation platform; it carries a 2.0T engine with a maximum power of 237 horsepower and a maximum torque of 350 N·m, matched with a 9-speed automatic gearbox with a manual mode, and uses a 1920×1080 high-definition monocular camera. With automatic start-up test equipment and MATLAB simulation, the conventional schemes and the present method are each tested on 90 groups of data under various indoor and outdoor lighting conditions, day and night. The output test comparison is shown in fig. 2 and Table 1: the start-up delay of conventional method one fluctuates around 1.3 s, that of conventional method two around 0.8 s, and that of the present method around 0.25 s; the average delays of the three methods are given in Table 2. Several randomly selected groups of recognition-accuracy results are listed as examples:
table 1: and (5) comparing the experimental results of random group selection.
Figure BDA0003190399400000071
Table 2: comparison of average experimental results.

Experimental sample      Conventional method one    Conventional method two    Method of the invention
Recognition accuracy     98.4%                      99.5%                      99.9%
Start-up delay           1.3 s                      0.8 s                      0.25 s
Compared with conventional methods, the present method has higher recognition accuracy and lower latency: when the owner sits in the vehicle, the in-vehicle camera quickly recognizes the owner and starts the vehicle automatically, making vehicle start-up more intelligent and automated and avoiding both the inconvenience of carrying a car key and unnecessary cost.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (9)

1. A face recognition operation method based on deep learning, characterized in that it comprises the steps of:
collecting image data containing faces;
labeling the face picture data, including framing the pixel position of the face in each picture with a rectangular box;
training a deep neural network model on the labeled data;
starting a camera and transmitting the captured image to a computing unit on which the trained face detection network model is deployed, the computing unit capturing the face in the image and extracting its facial features;
judging whether the person entering the vehicle is the vehicle owner by computing the similarity between the extracted facial features and the stored owner features;
if the person entering the vehicle is identified as the owner, the vehicle is started automatically;
if the person is identified as someone other than the owner, the vehicle is not started automatically and must be started manually with a key or other equipment.
2. The deep learning-based face recognition operation method of claim 1, wherein: collecting and labeling the data includes,
using, but not limited to, a 1920×1080 high-definition monocular camera mounted at the same angle as the vehicle body camera to continuously capture picture data of the faces to be collected under various indoor and outdoor lighting conditions, both day and night.
3. The deep learning-based face recognition operation method of claim 1 or 2, characterized in that: labeling the collected data may include,
framing the face with a 2D box, locating and annotating the face, and recording the pixel coordinates of the face in the image.
4. The deep learning-based face recognition operation method of claim 3, wherein: training on the labeled data includes,
building a deep neural network for face detection with a deep learning framework, where the input of the network is an image and the output is the predicted pixel-coordinate information of the face in the input image together with the confidence of the corresponding points.
5. The deep learning-based face recognition operation method of claim 4, wherein: softmax cross-entropy is used as the loss function, defined as follows.
Softmax function:
S_j = \frac{e^{a_j}}{\sum_{k=1}^{T} e^{a_k}}
Cross-entropy function:
L = -\sum_{j=1}^{T} y_j \log S_j
where L is the loss, a_j is the j-th raw output of the network, and S_j is the j-th value of the softmax output vector S, indicating the probability that the sample belongs to the j-th class; the index j of the summation runs from 1 to the total number of classes T, so the label y is a 1×T vector whose entries are all 0 except for a single 1 at the position of the true class.
6. The deep learning-based face recognition operation method of claim 4 or 5, characterized in that it comprises the steps of:
reading the labeled data with a deep learning framework,
training the built face detection deep neural network, and computing the error between the model prediction and the ground truth with the loss function;
updating the parameters of the deep neural network model with a gradient optimizer according to this error;
until the mean average precision of the training metric reaches 98% or more.
7. The deep learning-based face recognition operation method of claim 6, wherein: the trained deep neural network model is deployed into engineering code and the pictures captured by the camera are predicted frame by frame, including,
calling the trained deep neural network model from C++ code, reading real-time camera frames, and processing the read frames with the deep neural network model in real time;
writing the processed result of each frame into an array on which the decision logic operates.
8. The deep learning-based face recognition operation method of claim 7, wherein:
the obtained result array is taken out and compared with the facial features extracted from the pre-stored photo of the vehicle owner;
the similarity is computed by the Euclidean distance method; if the similarity reaches the set threshold, the identified person is regarded as the vehicle owner and the judgment result is 1;
otherwise, the identified person is regarded as a non-owner and the judgment result is -1.
9. The deep learning-based face recognition operation method of claim 8, wherein: if the judgment result is 1, the owner-entry condition is considered satisfied and the camera unit sends a start instruction to the vehicle start system;
otherwise, if the judgment result is -1, the automatic start condition is not satisfied and the camera unit sends no instruction to the vehicle start unit.
CN202110876163.6A 2021-07-30 2021-07-30 Face recognition operation method based on deep learning Pending CN113822145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876163.6A CN113822145A (en) 2021-07-30 2021-07-30 Face recognition operation method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876163.6A CN113822145A (en) 2021-07-30 2021-07-30 Face recognition operation method based on deep learning

Publications (1)

Publication Number Publication Date
CN113822145A true CN113822145A (en) 2021-12-21

Family

ID=78924072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876163.6A Pending CN113822145A (en) 2021-07-30 2021-07-30 Face recognition operation method based on deep learning

Country Status (1)

Country Link
CN (1) CN113822145A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114619993A (en) * 2022-03-16 2022-06-14 上海齐感电子信息科技有限公司 Automobile control method based on face recognition, system, equipment and storage medium thereof
CN116962875A (en) * 2023-09-20 2023-10-27 深圳市壹方智能电子科技有限公司 Face recognition self-starting-based vehicle-mounted camera module and control method thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005207110A (en) * 2004-01-22 2005-08-04 Omron Corp Vehicle utilization permission system and gate utilization permission system
US7110570B1 (en) * 2000-07-21 2006-09-19 Trw Inc. Application of human facial features recognition to automobile security and convenience
CN202686280U (en) * 2012-06-06 2013-01-23 浙江吉利汽车研究院有限公司杭州分公司 Vehicle anti-theft and start-up system based on face recognition
CN104228767A (en) * 2014-07-30 2014-12-24 哈尔滨工业大学深圳研究生院 Palm print authentication-based car starting method
CN108638877A (en) * 2018-04-28 2018-10-12 北京新能源汽车股份有限公司 A kind of vehicle starting method, device and electric vehicle
WO2019231105A1 (en) * 2018-05-31 2019-12-05 한국과학기술원 Method and apparatus for learning deep learning model for ordinal classification problem by using triplet loss function
CN111461001A (en) * 2020-03-31 2020-07-28 桂林电子科技大学 Computer vision automatic door opening method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7110570B1 (en) * 2000-07-21 2006-09-19 Trw Inc. Application of human facial features recognition to automobile security and convenience
JP2005207110A (en) * 2004-01-22 2005-08-04 Omron Corp Vehicle utilization permission system and gate utilization permission system
CN202686280U (en) * 2012-06-06 2013-01-23 浙江吉利汽车研究院有限公司杭州分公司 Vehicle anti-theft and start-up system based on face recognition
CN104228767A (en) * 2014-07-30 2014-12-24 哈尔滨工业大学深圳研究生院 Palm print authentication-based car starting method
CN108638877A (en) * 2018-04-28 2018-10-12 北京新能源汽车股份有限公司 A kind of vehicle starting method, device and electric vehicle
WO2019231105A1 (en) * 2018-05-31 2019-12-05 한국과학기술원 Method and apparatus for learning deep learning model for ordinal classification problem by using triplet loss function
CN111461001A (en) * 2020-03-31 2020-07-28 桂林电子科技大学 Computer vision automatic door opening method and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114619993A (en) * 2022-03-16 2022-06-14 上海齐感电子信息科技有限公司 Automobile control method based on face recognition, system, equipment and storage medium thereof
CN114619993B (en) * 2022-03-16 2023-06-16 上海齐感电子信息科技有限公司 Automobile control method based on face recognition, system, equipment and storage medium thereof
CN116962875A (en) * 2023-09-20 2023-10-27 深圳市壹方智能电子科技有限公司 Face recognition self-starting-based vehicle-mounted camera module and control method thereof
CN116962875B (en) * 2023-09-20 2024-03-01 深圳市壹方智能电子科技有限公司 Face recognition self-starting-based vehicle-mounted camera module and control method thereof

Similar Documents

Publication Publication Date Title
CN112926405B (en) Method, system, equipment and storage medium for detecting wearing of safety helmet
CN110135295A (en) A kind of unsupervised pedestrian recognition methods again based on transfer learning
CN109359697A (en) Graph image recognition methods and inspection system used in a kind of power equipment inspection
CN107977656A (en) A kind of pedestrian recognition methods and system again
CN109214001A (en) A kind of semantic matching system of Chinese and method
CN113822145A (en) Face recognition operation method based on deep learning
CN110598535A (en) Face recognition analysis method used in monitoring video data
CN110097029B (en) Identity authentication method based on high way network multi-view gait recognition
CN109740479A (en) A kind of vehicle recognition methods, device, equipment and readable storage medium storing program for executing again
CN112329536A (en) Single-sample face recognition method based on alternative pair anti-migration learning
CN112183438B (en) Image identification method for illegal behaviors based on small sample learning neural network
CN113269070A (en) Pedestrian re-identification method fusing global and local features, memory and processor
CN116524189A (en) High-resolution remote sensing image semantic segmentation method based on coding and decoding indexing edge characterization
CN111507353A (en) Chinese field detection method and system based on character recognition
CN116977937A (en) Pedestrian re-identification method and system
CN111797705A (en) Action recognition method based on character relation modeling
CN113505719B (en) Gait recognition model compression system and method based on local-integral combined knowledge distillation algorithm
CN113762166A (en) Small target detection improvement method and system based on wearable equipment
CN116543269B (en) Cross-domain small sample fine granularity image recognition method based on self-supervision and model thereof
CN115797884B (en) Vehicle re-identification method based on human-like visual attention weighting
CN116229511A (en) Identification re-recognition method based on golden monkey trunk feature extraction
CN111144233B (en) Pedestrian re-identification method based on TOIM loss function
CN111428675A (en) Pedestrian re-recognition method integrated with pedestrian posture features
CN114220013A (en) Camouflaged object detection method based on boundary alternating guidance
CN114022831A (en) Binocular vision-based livestock body condition monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination