CN111191603A - Method and device for identifying people in vehicle, terminal equipment and medium - Google Patents

Method and device for identifying people in vehicle, terminal equipment and medium

Info

Publication number
CN111191603A
CN111191603A
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911406966.4A
Other languages
Chinese (zh)
Other versions
CN111191603B (en)
Inventor
彭鹏 (Peng Peng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911406966.4A priority Critical patent/CN111191603B/en
Publication of CN111191603A publication Critical patent/CN111191603A/en
Application granted granted Critical
Publication of CN111191603B publication Critical patent/CN111191603B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of face recognition, and in particular to a method, an apparatus, a terminal device and a medium for identifying people in a vehicle. The method comprises: acquiring a vehicle image and identifying a vehicle region and a person region in it, the person region comprising a face region and/or a human body region; if the person region lies within the vehicle region, obtaining a proportionality coefficient between the area of the person region and the area of the vehicle region; and if the proportionality coefficient is greater than a proportion threshold, determining that the person corresponding to the person region is an in-vehicle occupant. Occupants and people outside the vehicle are distinguished by the relationship between the person region and the vehicle region, so person recognition and vehicle recognition are combined and the people in the image are classified automatically, avoiding manual labelling of vehicle occupants and improving efficiency. Abnormal person regions are discarded according to the relative sizes of the person and vehicle regions, which lowers the misjudgment rate and facilitates matching people to vehicles across large volumes of surveillance video frames.

Description

Method and device for identifying people in vehicle, terminal equipment and medium
Technical Field
The application belongs to the technical field of face recognition, and particularly relates to a method, a device, terminal equipment and a medium for recognizing people in a vehicle.
Background
In modern traffic management systems, monitoring equipment captures images of vehicles in service, and image recognition technology is applied to those images for face recognition, license plate recognition and the like, enabling functions such as traffic control and violation detection. However, face recognition alone only yields face information, and a captured image may contain many faces. Because face recognition and vehicle recognition run independently of each other, their results must be checked and matched manually after both have completed in order to classify each face as a pedestrian, a driver and so on. The degree of automation and the efficiency are therefore low, which hinders the review of large volumes of surveillance video frames.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a terminal device and a medium for identifying people in a vehicle, which can solve the problem of low person-classification efficiency caused by existing face recognition and vehicle recognition methods operating independently of each other.
In a first aspect, an embodiment of the present application provides an in-vehicle person identification method, where the in-vehicle person identification method includes:
acquiring a vehicle image;
identifying a vehicle region and a personnel region in the vehicle image, wherein the personnel region comprises a human face region and/or a human body region;
if the person region is within the vehicle region, acquiring a proportionality coefficient between the area of the person region and the area of the vehicle region;
and if the proportionality coefficient is greater than a proportion threshold, determining that the person corresponding to the person region is an in-vehicle occupant.
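The steps of the first aspect can be sketched as a small routine. The `Box` type, the default `ratio_threshold` value and the function names below are illustrative assumptions, not values fixed by the claims:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned rectangle: top-left corner (x, y), width w, height h."""
    x: float
    y: float
    w: float
    h: float

    @property
    def area(self) -> float:
        return self.w * self.h

def contains(outer: "Box", inner: "Box") -> bool:
    """True if `inner` lies strictly inside `outer` (the containment test of the claims)."""
    return (inner.x > outer.x and inner.y > outer.y
            and inner.x + inner.w < outer.x + outer.w
            and inner.y + inner.h < outer.y + outer.h)

def is_in_vehicle(person: "Box", vehicle: "Box", ratio_threshold: float = 0.02) -> bool:
    """Claimed decision: the person box is inside the vehicle box AND the
    proportionality coefficient (area ratio) exceeds the threshold."""
    if not contains(vehicle, person):
        return False
    return person.area / vehicle.area > ratio_threshold
```

For instance, against a 200×150-pixel vehicle box at the origin, a 40×40 face box at (40, 30) passes both tests, while a box lying outside the vehicle rectangle is rejected at the containment step.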
In a second aspect, an embodiment of the present application provides an in-vehicle person identification apparatus, where the in-vehicle person identification apparatus includes:
the image acquisition module is used for acquiring a vehicle image;
the region identification module is used for identifying a vehicle region and a personnel region in the vehicle image, wherein the personnel region comprises a human face region and/or a human body region;
the proportionality coefficient acquisition module is used for acquiring a proportionality coefficient between the area of the person region and the area of the vehicle region if the person region is within the vehicle region;
and the in-vehicle personnel judgment module is used for judging that the personnel corresponding to the personnel area is the in-vehicle personnel if the proportion coefficient is larger than a proportion threshold value.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the in-vehicle person identification method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the method for identifying an occupant in a vehicle in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the method for identifying an in-vehicle person as described in any one of the above first aspects.
Compared with the prior art, the embodiments of the application have the following advantages: image recognition is used to identify the vehicle region and the person region in a vehicle image, and occupants versus people outside the vehicle are determined from the relationship between the two regions. Combining person recognition with vehicle recognition classifies the people in the image automatically, avoiding manual labelling of vehicle occupants, so efficiency is high. Abnormal person regions are discarded according to the relative sizes of the person and vehicle regions, reducing the misjudgment rate and facilitating the matching of people to vehicles across large volumes of surveillance video frames; for example, once a vehicle is matched to its driver, the driver's information and the vehicle's information can be compared to judge whether the driver is the vehicle owner or is driving without a license.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or the prior-art descriptions are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic flowchart of an in-vehicle occupant identification method according to an embodiment of the present application;
FIG. 2 is a schematic view of a vehicle image of a method for identifying a person in a vehicle according to an embodiment of the present application;
FIG. 3 is a schematic view of another vehicle image of a method for identifying a person in a vehicle according to an embodiment of the present application;
fig. 4 is a schematic flowchart of an in-vehicle person identification method provided in the second embodiment of the present application;
fig. 5 is a schematic flowchart of a method for identifying people in a vehicle according to a third embodiment of the present application;
fig. 6 is a schematic diagram illustrating a principle of identifying a left-right position relationship of an occupant in a vehicle according to a third embodiment of the present application;
fig. 7 is an interaction schematic diagram of an in-vehicle person identification method provided in the third embodiment of the present application;
fig. 8 is a schematic structural diagram of an in-vehicle passenger identification device according to a fourth embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The method for identifying the person in the vehicle provided by the embodiments of the application can be applied to terminal devices such as a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a server and the like; the embodiments of the application do not limit the specific type of the terminal device.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 1, a schematic flow chart of an in-vehicle person identification method provided in an embodiment of the present application is shown, where the in-vehicle person identification method may be applied to a terminal device, and as shown in the drawing, the in-vehicle person identification method may include the following steps:
step S101, a vehicle image is acquired.
The vehicle image is a frame, captured by image acquisition equipment such as road monitoring cameras, that contains at least one vehicle. The terminal device may obtain the vehicle image directly from road monitoring, or retrieve it from a storage device in which vehicle images are kept.
And S102, identifying a vehicle area and a person area in the vehicle image.
The vehicle region may include a body region, a windshield region, and so on. Vehicle recognition technology can be used to mark the region of a vehicle in the vehicle image: a rectangular frame is drawn around the vehicle body, and the area inside that frame is the body region. Vehicle recognition can likewise mark the front windshield of the vehicle: a rectangular frame is drawn around it, and the area inside is the windshield region. The person region may include a face region and/or a human body region. With human body recognition, the body region can be identified from the whole image according to body features; with face recognition, the face region can be identified according to facial features.
In this embodiment, if the person regions include both face regions and body regions, the number of person regions is counted as the number of face regions or the number of body regions, and the area of a person region is the area of its face region or the area of its body region.
Step S103, if the person region is within the vehicle region, acquire a proportionality coefficient between the area of the person region and the area of the vehicle region.
Both the vehicle region and the person region are spatial extents; for example, both are rectangular frames. If the rectangle of the vehicle region encloses the rectangle of the person region, the person region is within the vehicle region; if it does not enclose it at all, the person region is not within the vehicle region; and if the vehicle rectangle only partially encloses the person rectangle, whether the person region counts as inside can be judged from the size of the enclosed area.
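The containment test just described, including the partial-enclosure fallback, might look as follows; the `coverage` cut-off of 0.8 is a hypothetical value, since the embodiment does not say how large the enclosed area must be:

```python
def intersection_area(ax, ay, aw, ah, bx, by, bw, bh):
    """Overlap area of two axis-aligned rectangles given as (x, y, w, h)."""
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return ix * iy

def person_in_vehicle_region(person, vehicle, coverage=0.8):
    """person and vehicle are (x, y, w, h) tuples. The person region counts as
    inside the vehicle region when at least `coverage` of its own area is
    enclosed, which subsumes the fully-enclosed case (fraction = 1.0)."""
    _, _, pw, ph = person
    enclosed = intersection_area(*person, *vehicle)
    return enclosed / (pw * ph) >= coverage
```

Measuring the enclosed fraction against the person box (not the vehicle box) keeps the decision stable for small face rectangles near the vehicle frame's edge.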
When the person region is within the vehicle region but its area is far smaller than that of the vehicle region, the person region can be assumed to be a false detection. To improve accuracy and exclude such false detections, the embodiment of the application sets a proportion threshold: when the proportionality coefficient between the person-region area and the vehicle-region area is below the threshold, the person region is treated as a false detection, reducing the misjudgment rate. Conversely, when the proportionality coefficient approaches 1, the area of the person region is almost the same as that of the vehicle region, and the person region may likewise be a detection error.
Optionally, the vehicle region is a vehicle body region, and acquiring the proportionality coefficient if the person region is within the vehicle region comprises:
if the person region is within the body region, acquiring a proportionality coefficient between the area of the person region and the area of the body region.
For example, referring to fig. 2, a vehicle image schematic diagram of the in-vehicle person identification method provided in the first embodiment is shown. The pixel at the upper-left corner of the vehicle image is taken as the origin, the horizontal direction of the image as the x-axis and the vertical direction as the y-axis, and the rectangular frames of the vehicle body region, the face region and the human body region are determined through vehicle recognition and person recognition. The rectangular frame of the face region can be represented as (X_f, Y_f, W_f, H_f), the smallest rectangle shown; the rectangular frame of the human body region as (X_b, Y_b, W_b, H_b), the rectangle that encloses the face rectangle; and the rectangular frame of the vehicle body region as (X_c, Y_c, W_c, H_c), the rectangle roughly enclosing the vehicle body. Here X and Y denote the x-axis and y-axis coordinates, W and H denote the width and height of the rectangle, and the subscripts f, b and c denote the face, the human body and the car respectively.
If the face area is located in the vehicle body area, the following formula needs to be satisfied:
X_f > X_c and Y_f > Y_c and (X_f + W_f) < (X_c + W_c) and (Y_f + H_f) < (Y_c + H_c)
if the human body area is located in the vehicle body area, the following formula needs to be satisfied:
X_b > X_c and Y_b > Y_c and (X_b + W_b) < (X_c + W_c) and (Y_b + H_b) < (Y_c + H_c)
if the human body area comprises the human face area and the human body area, the two formulas are simultaneously met, and then the person in the human body area can be judged to be the person in the vehicle.
The area of the face region is represented by the area of its rectangular frame, calculated from the frame; the area of the vehicle body region is obtained correspondingly. The proportionality coefficient is obtained by dividing the face-region area by the body-region area. Since the face region is constrained to lie within the body region, the coefficient lies in the range [0, 1], and a face region whose coefficient is below the proportion threshold is treated as a false detection in this embodiment.
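A sketch of this false-detection filter; the `low` and `high` cut-offs are illustrative assumptions, as the embodiment only states that the coefficient lies in [0, 1] and that values below the proportion threshold (or, per the earlier remark, close to 1) indicate detection errors:

```python
def filter_false_detections(person_boxes, vehicle_area, low=0.01, high=0.9):
    """person_boxes: list of (x, y, w, h) face/body rectangles already known to
    lie inside the vehicle (or windshield) rectangle. Keeps only boxes whose
    area ratio against vehicle_area is plausible for a real occupant."""
    kept = []
    for (x, y, w, h) in person_boxes:
        ratio = (w * h) / vehicle_area   # proportionality coefficient in [0, 1]
        if low < ratio < high:           # too small: spurious; near 1: box as big as the car
            kept.append((x, y, w, h))
    return kept
```

For a vehicle frame of area 10000, a 2×2 speck is dropped while a 50×50 face box (ratio 0.25) survives.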
Optionally, the vehicle region is a windshield region, and acquiring the proportionality coefficient if the person region is within the vehicle region comprises:
if the person region is within the windshield region, acquiring a proportionality coefficient between the area of the person region and the area of the windshield region.
For snapshots of dense scenes such as intersections, where pedestrians and vehicles crowd together, restricting the vehicle region to the windshield region eliminates interference from special cases such as a pedestrian overlapping a vehicle, and improves the accuracy of the in-vehicle judgment.
For example, referring to fig. 3, another schematic diagram of a vehicle image for the in-vehicle person identification method is shown. The rectangular frame of the windshield region can be represented as (X_w, Y_w, W_w, H_w), the rectangle roughly enclosing the front windshield, where the subscript w denotes the windshield.
If the face area is within the windshield area, the following equation needs to be satisfied:
X_f > X_w and Y_f > Y_w and (X_f + W_f) < (X_w + W_w) and (Y_f + H_f) < (Y_w + H_w)
if the body area is within the windshield area, the following equation needs to be satisfied:
X_b > X_w and Y_b > Y_w and (X_b + W_b) < (X_w + W_w) and (Y_b + H_b) < (Y_w + H_w)
if the human body area comprises the human face area and the human body area, the two formulas are simultaneously met, and then the person in the human body area can be judged to be the person in the vehicle.
The area of the face region is represented by the area of its rectangular frame, calculated from the frame; the area of the windshield region is obtained correspondingly. The proportionality coefficient is obtained by dividing the face-region area by the windshield-region area. Since the face region is constrained to lie within the windshield region, the coefficient lies in the range [0, 1], and a face region whose coefficient is below the proportion threshold is treated as a false detection in this embodiment.
Step S104, if the proportionality coefficient is greater than a proportion threshold, determine that the person corresponding to the person region is an in-vehicle occupant.
In-vehicle personnel refers to the people inside the vehicle, i.e. the driver and the passengers, so that vehicle occupants are distinguished from pedestrians.
Image recognition is used to identify the vehicle region and the person region in a vehicle image, and occupants versus people outside the vehicle are determined from the relationship between the two regions. Combining person recognition with vehicle recognition classifies the people in the image automatically, avoiding manual labelling of vehicle occupants, so efficiency is high. Abnormal person regions are discarded according to the relative sizes of the person and vehicle regions, reducing the misjudgment rate and facilitating the matching of people to vehicles across large volumes of surveillance video frames; for example, once a vehicle is matched to its driver, the driver's information and the vehicle's information can be compared to judge whether the driver is the vehicle owner or is driving without a license.
Referring to fig. 4, a flowchart of an in-vehicle person identification method according to a second embodiment of the present application is shown, where the in-vehicle person identification method may be applied to a terminal device, and the in-vehicle person identification method may include the following steps:
in step S201, a vehicle image is acquired.
The vehicle image is a frame, captured by image acquisition equipment such as road monitoring cameras, that contains at least one vehicle. The terminal device may obtain the vehicle image directly from road monitoring, or retrieve it from a storage device in which vehicle images are kept.
And step S202, identifying a vehicle area and a person area in the vehicle image.
The vehicle region may include a body region, a windshield region, and the like. Existing vehicle recognition technology partitions the vehicle in the vehicle image; for example, a rectangular frame is drawn around the vehicle body, and the area inside the frame is the body region. Existing vehicle recognition can likewise mark the front windshield of the vehicle, for example with a rectangular frame whose interior is the windshield region. The person region may include a face region and/or a human body region: with existing human body recognition, the body region can be identified from the whole image according to body features, and with existing face recognition, the face region can be identified according to facial features.
In this embodiment, if the person regions include both face regions and body regions, the number of person regions is counted as the number of face regions or the number of body regions, and the area of a person region is the area of its face region or the area of its body region.
Step S203, if the person region is within the vehicle region, acquire a proportionality coefficient between the area of the person region and the area of the vehicle region.
Both the vehicle region and the person region are spatial extents; for example, both are rectangular frames. If the rectangle of the vehicle region encloses the rectangle of the person region, the person region is within the vehicle region; if it does not enclose it at all, the person region is not within the vehicle region; and if the vehicle rectangle only partially encloses the person rectangle, whether the person region counts as inside can be judged from the size of the enclosed area.
Optionally, the vehicle region is a vehicle body region, and acquiring the proportionality coefficient if the person region is within the vehicle region comprises:
if the person region is within the body region, acquiring a proportionality coefficient between the area of the person region and the area of the body region.
Optionally, the vehicle region is a windshield region, and acquiring the proportionality coefficient if the person region is within the vehicle region comprises:
if the person region is within the windshield region, acquiring a proportionality coefficient between the area of the person region and the area of the windshield region.
Step S204, if the proportionality coefficient is greater than a proportion threshold, determine that the person corresponding to the person region is an in-vehicle occupant.
In-vehicle personnel refers to the people inside the vehicle, i.e. the driver and the passengers, so that vehicle occupants are distinguished from pedestrians.
And step S205, acquiring the number of the personnel areas in the vehicle area.
The person regions within the vehicle region are determined according to the method of the first embodiment, and that subset is counted. For example, if the vehicle region is the body region, the person region is a face region, and only one face region lies within the body region, the number of person regions in the vehicle region is 1; if the vehicle region is the windshield region and two face regions lie within it, the number of person regions in the vehicle region is 2.
If the person regions comprise both face regions and human body regions, then, because a body region generally contains a face region, only the number of face regions or the number of body regions within the vehicle region is counted.
In step S206, if the number of person regions in the vehicle region is 1, it is determined that the person corresponding to that region is a front-row occupant.
Front-row occupants are the people in the two front seats of the vehicle. Since most images captured by acquisition equipment such as monitoring cameras show vehicles in motion, if the vehicle is in a driving state and only one person region lies within the vehicle region, that person can be judged to be the driver, i.e. a front-row occupant.
Step S207, if the number of the person areas in the vehicle area is greater than or equal to 2, acquiring an area of each person area.
The area of a person region may be the area of its corresponding rectangular frame, calculated by multiplying the width of the frame by its height, that is, S = W × H.
In step S208, a first person region having the largest area and a second person region having the second largest area are determined.
The largest region area means the rectangular frame with the greatest area among the person regions in the vehicle region; the second-largest means the frame whose area is smaller than the largest but larger than that of every other person region in the vehicle region. For example, the areas of the rectangular frames of the person regions in the vehicle region are sorted from large to small, and the person regions corresponding to the first two frames are selected: the first person region is the one with the largest frame area, denoted S_1, and the second person region is the one with the second-largest frame area, denoted S_2.
Step S209, determining whether the area ratio of the first person region to the second person region exceeds a preset ratio.
The ratio of the areas of the first person region and the second person region is obtained by the formula:

r = (1 - S2/S1) × 100%

where a larger value of r indicates a larger difference between the two region areas, and a smaller value of r indicates a smaller difference.
Because the area difference between the human body or face of a rear-row passenger and that of a front-row passenger is large, while the area difference between passengers in the same row is small, the embodiment of the present application can determine a preset ratio R through statistical analysis.
And step S210, if the area ratio of the region exceeds a preset ratio, determining that the person corresponding to the first person region is a front passenger.
When r is greater than R, S1 and S2 differ greatly: the person in the person region corresponding to S1 is a front-row passenger, and the person in the person region corresponding to S2 is a rear-row passenger.
And S211, if the area ratio of the region does not exceed a preset ratio, determining that the persons corresponding to the first person region and the second person region are front row passengers.
When r is less than R, S1 and S2 differ little: the persons in the person regions corresponding to both S1 and S2 are front-row passengers.
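The r-test above can be sketched as a small classifier. The function name and the returned labels are illustrative assumptions; only the formula and the comparison against the preset ratio R come from the text:

```python
def classify_front_rear(s1: float, s2: float, preset_ratio_R: float):
    """Apply r = (1 - S2/S1) * 100% and compare with the preset ratio R.
    s1 is the largest region area, s2 the second-largest (s1 >= s2 > 0)."""
    r = (1.0 - s2 / s1) * 100.0
    if r > preset_ratio_R:
        # Large area difference: the smaller region belongs to a rear-row passenger.
        return ("front", "rear")
    # Small area difference: both regions belong to front-row passengers.
    return ("front", "front")
```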
In this embodiment, image recognition technology is used to identify the vehicle region and the person regions in the vehicle image, and the person conditions in the vehicle region are further analyzed to determine whether each passenger is a front-row or rear-row passenger. When the number of person regions in the vehicle region is 1, the person in that single region can be classified as the driver; when there are more, calculating the areas of the person regions in the vehicle region divides them according to the area difference between front-row and rear-row passengers. The implementation is simple, the analysis of front-row and rear-row passengers provides conditions for subsequent driver identification, and it facilitates the identification of violations by front-row passengers.
Referring to fig. 5, a flowchart of an in-vehicle person identification method according to a third embodiment of the present application is shown, where the in-vehicle person identification method may be applied to a terminal device, and the in-vehicle person identification method may include the following steps:
in step S301, a vehicle image is acquired.
And step S302, identifying a vehicle area and a person area in the vehicle image.
Step S303, if the personnel area is in the vehicle area, acquiring a proportional coefficient of the region areas of the personnel area and the vehicle area.
And step S304, if the proportion coefficient is larger than a proportion threshold value, determining that the person corresponding to the person area is an in-vehicle person.
Step S305, acquiring the number of the personnel areas in the vehicle area.
In step S306, if the number of the passenger regions in the vehicle region is 1, it is determined that the passenger corresponding to the passenger region is a front passenger.
Step S307, if the number of the personnel areas in the vehicle area is greater than or equal to 2, acquiring the area of each personnel area.
In step S308, a first person region having the largest area and a second person region having the second largest area are determined.
Step S309, determining whether the area ratio of the first person region to the second person region exceeds a preset ratio.
Step S310, if the area ratio of the region exceeds a preset ratio, determining that the person corresponding to the first person region is a front passenger.
Step S311, if the area ratio of the region does not exceed a preset ratio, determining that the persons corresponding to the first person region and the second person region are front passengers.
Steps S301 to S311 are the same as steps S201 to S211; for details, refer to the related description of steps S201 to S211, which is not repeated here.
Step S312, left and right rudder information of the vehicle in the vehicle image is acquired.
When recognizing the vehicle image, the vehicle recognition technology can also identify the left/right-rudder information, vehicle type information, and so on of the vehicle; for example, it can recognize the license plate number of the vehicle and match the left/right-rudder information of the vehicle according to the license plate number.
And step 313, determining a personnel area where the driver is located from the personnel areas corresponding to the front passengers according to the left and right rudder information.
When the vehicle in the vehicle image is heading toward the bottom of the image, if the vehicle is left-rudder, the driver is the front-row passenger on the right side of the vehicle image; if the vehicle is right-rudder, the driver is the front-row passenger on the left side of the vehicle image. The person region where the driver is located can be determined either by comparing the left-right positional relationship of the person regions corresponding to the two front-row passengers, or by comparing the positional relationship between those person regions and the vertical center line of the vehicle region.
For example, as shown in fig. 6, which is a schematic diagram of the recognition of the left-right positional relationship of the vehicle interior person in the third embodiment of the present application, the person region where the driver is located is determined according to the left-right positional relationship between the vertical center line of the rectangular frame of the first person region and the vertical center line of the rectangular frame of the second person region.
Specifically, the difference in the x-axis direction between the vertical center line of the rectangular frame of the face region and the vertical center line of the rectangular frame of the vehicle region is calculated:

r_fc = (Xf + Wf/2) - (Xc + Wc/2)

If r_fc > 0, the person in the face region is on the left side of the vehicle; if r_fc < 0, the person in the face region is on the right side of the vehicle. Here, Xf + Wf/2 represents the vertical center line of the rectangular frame of the face region, and Xc + Wc/2 represents the vertical center line of the rectangular frame of the vehicle region.
The difference in the x-axis direction between the vertical center line of the rectangular frame of the human body region and the vertical center line of the rectangular frame of the vehicle region is calculated:

r_bc = (Xb + Wb/2) - (Xc + Wc/2)

If r_bc > 0, the person in the human body region is on the left side of the vehicle; if r_bc < 0, the person in the human body region is on the right side of the vehicle.
Similarly, whether the person corresponding to the face region or the human body region is on the left or right side of the vehicle can also be determined relative to the windshield region. The difference in the x-axis direction between the vertical center line of the rectangular frame of the face region and the vertical center line of the rectangular frame of the windshield region is calculated:

r_fw = (Xf + Wf/2) - (Xw + Ww/2)

If r_fw > 0, the person in the face region is on the left side of the vehicle; if r_fw < 0, the person in the face region is on the right side of the vehicle.
The difference in the x-axis direction between the vertical center line of the rectangular frame of the human body region and the vertical center line of the rectangular frame of the windshield region is calculated:

r_bw = (Xb + Wb/2) - (Xw + Ww/2)

If r_bw > 0, the person in the human body region is on the left side of the vehicle; if r_bw < 0, the person in the human body region is on the right side of the vehicle.
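The center-line comparisons above reduce to one helper plus a driver-selection step. This is an illustrative sketch: the box layout, function names, and the left-hand-drive flag are assumptions; the sign convention (positive difference means the left side of the vehicle) follows the text:

```python
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # rectangular frame (x, y, w, h)

def side_of_vehicle(person_box: Box, ref_box: Box) -> str:
    """Difference of vertical center lines along the x-axis, as in
    r = (Xp + Wp/2) - (Xr + Wr/2); the reference frame may be the
    vehicle body or the windshield rectangle."""
    xp, _, wp, _ = person_box
    xr, _, wr, _ = ref_box
    diff = (xp + wp / 2.0) - (xr + wr / 2.0)
    return "left" if diff > 0 else "right"

def driver_region(front_boxes: List[Box], ref_box: Box,
                  left_hand_drive: bool) -> Box:
    """Pick the driver's person region from the two front-row regions
    using the left/right-rudder information (left rudder: driver on
    the vehicle's left side)."""
    sides: Dict[str, Box] = {side_of_vehicle(b, ref_box): b for b in front_boxes}
    return sides["left"] if left_hand_drive else sides["right"]
```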
Optionally, the method for identifying a person in the vehicle further includes:
acquiring vehicle type information of a vehicle in the vehicle image;
after determining the person region where the driver is located from the person regions corresponding to the front-row passengers according to the left/right-rudder information, the method further includes:
identifying face information in a person area where a driver is located;
acquiring driver license information corresponding to the face information;
and if the permitted vehicle type contained in the driver license information does not match the vehicle type information, outputting preset prompt information.
Vehicle type information is obtained from the vehicle image. After the driver is determined from the front-row passengers, the driver's face information is identified, and the driver license information corresponding to the face information is obtained through face matching. The permitted vehicle types in the driver license information are then compared with the vehicle type information. For example, if the driver's permitted vehicle type is C1 while the vehicle type in the vehicle image is A1, the driver is obviously not qualified to drive an A1-type vehicle; preset prompt information such as an alarm is output, and information such as a ticket can be generated automatically.
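The permitted-type comparison can be sketched as below. The permitted-type table and the message text are illustrative assumptions, not official license-class definitions; only the C1/A1 mismatch example comes from the text:

```python
from typing import Optional

# Hypothetical mapping from license class to the vehicle types it permits.
PERMITTED_TYPES = {
    "A1": {"A1", "B1", "C1"},
    "C1": {"C1"},
}

def license_mismatch_prompt(license_type: str,
                            vehicle_type: str) -> Optional[str]:
    """Return preset prompt information when the driver license does not
    cover the recognized vehicle type; return None when it matches."""
    if vehicle_type in PERMITTED_TYPES.get(license_type, set()):
        return None
    return (f"Alert: license class {license_type} does not permit "
            f"driving vehicle type {vehicle_type}")
```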
Referring to fig. 7, which is an interaction diagram of the in-vehicle person identification method provided in the third embodiment of the present application: the method is applied to a terminal device. A vehicle image is collected by a monitoring system, which transmits it to the terminal device. The terminal device obtains the driver license information, the vehicle type information of the vehicle, and the like from a traffic management system according to the driver face recognition result, where the traffic management system may be a system that stores the driver license information corresponding to driver face information and the vehicle type information corresponding to vehicle license plates. Finally, the permitted vehicle type in the driver license information is compared with the vehicle type information, and if they do not match, preset prompt information is output.
In this embodiment, image recognition technology is used to identify the vehicle region and person regions in the vehicle image, the person conditions in the vehicle region are analyzed to determine whether passengers are front-row or rear-row passengers, the positions of the front-row passengers are analyzed, and the driver's information is obtained by combining the left/right-rudder information of the vehicle. When there are two front-row passengers, the positional relationship between the vertical center lines of the rectangular frames of their person regions and the vertical center line of the rectangular frame of the vehicle region is analyzed, and the person region corresponding to the driver is determined according to the left/right-rudder information of the vehicle. The judgment method is simple, facilitates automatic identification of driver violations and similar conditions, improves the processing efficiency of vehicle images, reduces the cost of manually identifying drivers, and improves traffic supervision capability.
Fig. 8 shows a block diagram of an in-vehicle occupant recognition apparatus according to a fourth embodiment of the present application, which corresponds to the in-vehicle occupant recognition method according to the foregoing embodiment, and only shows portions related to the embodiment of the present application for convenience of description.
Referring to fig. 8, the in-vehicle person recognition apparatus includes:
an image acquisition module 81 for acquiring a vehicle image;
the region identification module 82 is used for identifying a vehicle region and a person region in the vehicle image, wherein the person region comprises a human face region and/or a human body region;
a proportionality coefficient obtaining module 83, configured to obtain a proportionality coefficient between the person region and a region area of the vehicle region if the person region is in the vehicle region;
and the in-vehicle personnel judgment module 84 is configured to judge that the personnel corresponding to the personnel area is the in-vehicle personnel if the proportionality coefficient is greater than a proportionality threshold.
Optionally, the vehicle region is a vehicle body region, and the proportionality coefficient obtaining module 83 is specifically configured to:
and if the personnel area is located in the vehicle body area, acquiring a proportional coefficient of the region areas of the personnel area and the vehicle body area.
Optionally, the vehicle area is a windshield area, and the proportionality coefficient obtaining module 83 is specifically configured to:
and if the personnel area is located in the windshield area, acquiring a proportionality coefficient of the area of the personnel area and the area of the windshield area.
Optionally, the in-vehicle person identification apparatus further includes:
and the single person judgment module is used for judging that the person corresponding to the person area is the front passenger if the number of the person areas in the vehicle area is 1.
Optionally, the in-vehicle person identification apparatus further includes:
the multi-person judgment module is used for acquiring the area of each person area and determining a first person area with the largest area and a second person area with the second largest area if the number of the person areas in the vehicle area is greater than or equal to 2;
judging whether the area ratio of the first personnel area to the second personnel area exceeds a preset ratio or not;
if the area ratio of the region exceeds a preset ratio, determining that the person corresponding to the first person region is a front passenger;
and if the area ratio does not exceed the preset ratio, judging that the persons corresponding to the first person area and the second person area are front passengers.
Optionally, the in-vehicle person identification apparatus further includes:
the left and right rudder information acquisition module is used for acquiring left and right rudder information of the vehicle in the vehicle image;
and the driver information determining module is used for determining a personnel area where the driver is located from the personnel areas corresponding to the front passengers according to the left and right rudder information.
Optionally, the in-vehicle person identification apparatus further includes:
the vehicle type information acquisition module is used for acquiring vehicle type information of a vehicle in the vehicle image;
the face information identification module is used for identifying face information in a personnel area where the driver is located after the personnel area where the driver is located is determined from the personnel area corresponding to the front passenger according to the left and right rudder information;
the driving license information acquisition module is used for acquiring driving license information corresponding to the face information;
and the prompt information output module is used for outputting preset prompt information if the permitted vehicle type contained in the driving license information does not match the vehicle type information.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules are based on the same concept as that of the embodiment of the method of the present application, specific functions and technical effects thereof may be specifically referred to a part of the embodiment of the method, and details are not described here.
Fig. 9 is a schematic structural diagram of a terminal device according to a fifth embodiment of the present application. As shown in fig. 9, the terminal device 9 of this embodiment includes: at least one processor 90 (only one shown in fig. 9), a memory 91, and a computer program 92 stored in the memory 91 and executable on the at least one processor 90. When the processor 90 executes the computer program 92, the steps in any of the above embodiments of the in-vehicle person identification method are implemented.
The terminal device 9 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will appreciate that fig. 9 is only an example of the terminal device 9 and does not constitute a limitation of the terminal device 9; it may include more or fewer components than shown, combine certain components, or use different components, and may further include, for example, an input/output device, a network access device, and the like.
The processor 90 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 91 may in some embodiments be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9 in other embodiments, such as a plug-in hard disk provided on the terminal device 9, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device. The memory 91 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 91 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
The computer-readable medium may include at least: any entity or device capable of carrying the computer program code, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
An embodiment of the present application also provides a computer program product; when the computer program product runs on a terminal device, the terminal device implements the steps in the above method embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An in-vehicle person identification method, characterized by comprising:
acquiring a vehicle image;
identifying a vehicle region and a personnel region in the vehicle image, wherein the personnel region comprises a human face region and/or a human body region;
if the personnel area is in the vehicle area, acquiring a proportional coefficient of the region areas of the personnel area and the vehicle area;
and if the proportion coefficient is larger than a proportion threshold value, determining that the person corresponding to the person area is the person in the vehicle.
2. The in-vehicle person identification method according to claim 1, wherein the vehicle region is a vehicle body region, and the obtaining a proportionality coefficient of a region area of the person region and the vehicle region if the person region is in the vehicle region includes:
and if the personnel area is located in the vehicle body area, acquiring a proportional coefficient of the region areas of the personnel area and the vehicle body area.
3. The in-vehicle occupant identification method according to claim 1, wherein the vehicle region is a windshield region, and the obtaining a proportionality coefficient of a region area of the occupant region and the vehicle region if the occupant region is within the vehicle region includes:
and if the personnel area is located in the windshield area, acquiring a proportionality coefficient of the area of the personnel area and the area of the windshield area.
4. The in-vehicle occupant identification method according to claim 1, further comprising:
and if the number of the personnel areas in the vehicle area is 1, determining that the personnel corresponding to the personnel area is a front passenger.
5. The in-vehicle occupant identification method according to any one of claims 1 to 4, further comprising:
if the number of the personnel areas in the vehicle area is greater than or equal to 2, acquiring the area of each personnel area, and determining a first personnel area with the largest area and a second personnel area with the second largest area;
judging whether the area ratio of the first personnel area to the second personnel area exceeds a preset ratio or not;
if the area ratio of the region exceeds a preset ratio, determining that the person corresponding to the first person region is a front passenger;
and if the area ratio does not exceed the preset ratio, judging that the persons corresponding to the first person area and the second person area are front passengers.
6. The in-vehicle occupant identification method according to claim 5, further comprising:
acquiring left and right rudder information of the vehicle in the vehicle image;
and determining the personnel area where the driver is located from the personnel areas corresponding to the front passengers according to the left and right rudder information.
7. The in-vehicle occupant identification method according to claim 6, further comprising:
acquiring vehicle type information of a vehicle in the vehicle image;
after determining the person region where the driver is located from the person regions corresponding to the front-row passengers according to the left/right-rudder information, the method further includes:
identifying face information in a person area where a driver is located;
acquiring driver license information corresponding to the face information;
and if the permitted vehicle type contained in the driver license information does not match the vehicle type information, outputting preset prompt information.
8. An in-vehicle person recognition device, characterized by comprising:
the image acquisition module is used for acquiring a vehicle image;
the region identification module is used for identifying a vehicle region and a personnel region in the vehicle image, wherein the personnel region comprises a human face region and/or a human body region;
the proportion coefficient acquisition module is used for acquiring the proportion coefficient of the region areas of the personnel area and the vehicle area if the personnel area is in the vehicle area;
and the in-vehicle personnel judgment module is used for judging that the personnel corresponding to the personnel area is the in-vehicle personnel if the proportion coefficient is larger than a proportion threshold value.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the in-vehicle person identification method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the in-vehicle occupant identification method according to any one of claims 1 to 7.
CN201911406966.4A 2019-12-31 2019-12-31 Method and device for identifying people in vehicle, terminal equipment and medium Active CN111191603B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911406966.4A CN111191603B (en) 2019-12-31 2019-12-31 Method and device for identifying people in vehicle, terminal equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911406966.4A CN111191603B (en) 2019-12-31 2019-12-31 Method and device for identifying people in vehicle, terminal equipment and medium

Publications (2)

Publication Number Publication Date
CN111191603A true CN111191603A (en) 2020-05-22
CN111191603B CN111191603B (en) 2023-04-18

Family

ID=70707795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911406966.4A Active CN111191603B (en) 2019-12-31 2019-12-31 Method and device for identifying people in vehicle, terminal equipment and medium

Country Status (1)

Country Link
CN (1) CN111191603B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949559A (en) * 2020-10-23 2021-06-11 深圳巴士集团股份有限公司 Pedestrian gift detection method and device and terminal equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204547917U (en) * 2015-04-15 2015-08-12 郭帅 The overcrowding detection alarm system of a kind of car
CN107953828A (en) * 2016-10-14 2018-04-24 株式会社万都 The pedestrian recognition method of vehicle and pedestrian's identifying system of vehicle
CN107991677A (en) * 2017-11-28 2018-05-04 广州汽车集团股份有限公司 A kind of pedestrian detection method
CN108537140A (en) * 2018-03-20 2018-09-14 浙江鼎奕科技发展有限公司 Occupant's recognition methods and system
CN109284669A (en) * 2018-08-01 2019-01-29 辽宁工业大学 Pedestrian detection method based on Mask RCNN
CN110348332A (en) * 2019-06-24 2019-10-18 长沙理工大学 The inhuman multiple target real-time track extracting method of machine under a kind of traffic video scene


Also Published As

Publication number Publication date
CN111191603B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN110855976B (en) Camera abnormity detection method and device and terminal equipment
WO2021098657A1 (en) Video detection method and apparatus, terminal device, and readable storage medium
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN111368612B Over-occupancy detection system, person detection method and electronic device
KR101834838B1 (en) System and method for providing traffic information using image processing
CN113947892B (en) Abnormal parking monitoring method and device, server and readable storage medium
CN110443245B (en) License plate region positioning method, device and equipment in non-limited scene
CN110909718B (en) Driving state identification method and device and vehicle
CN112749622B (en) Emergency lane occupation recognition method and device
CN112818839A (en) Method, device, equipment and medium for identifying violation behaviors of driver
CN113888860A (en) Method and device for detecting abnormal running of vehicle, server and readable storage medium
CN111191603B (en) Method and device for identifying people in vehicle, terminal equipment and medium
CN113408364B (en) Temporary license plate recognition method, system, device and storage medium
CN106845393A (en) Safety belt identification model construction method and device
CN112183206B (en) Traffic participant positioning method and system based on road side monocular camera
CN115019242B (en) Abnormal event detection method and device for traffic scene and processing equipment
CN109360137B (en) Vehicle accident assessment method, computer readable storage medium and server
CN115965636A (en) Vehicle side view generating method and device and terminal equipment
CN115984786A (en) Vehicle damage detection method and device, terminal and storage medium
CN115880632A (en) Timeout stay detection method, monitoring device, computer-readable storage medium, and chip
CN112950961B (en) Traffic flow statistical method, device, equipment and storage medium
CN113129597B (en) Method and device for identifying illegal vehicles on motor vehicle lane
CN116152790B (en) Safety belt detection method and device
CN114627651B (en) Pedestrian protection early warning method and device, electronic equipment and readable storage medium
CN115631477B (en) Target identification method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant