CN117078735A - Height detection method, system, electronic device and storage medium

Height detection method, system, electronic device and storage medium

Info

Publication number: CN117078735A
Application number: CN202311024297.0A
Authority: CN (China)
Prior art keywords: height, area, target, users, value
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曾庆宁, 胡建良, 黄欢, 周鹭莹, 黄嘉杰, 李嘉杰
Current Assignee: Grg Intelligent Technology Solution Co ltd
Original Assignee: Grg Intelligent Technology Solution Co ltd
Application filed by Grg Intelligent Technology Solution Co ltd
Priority to CN202311024297.0A
Publication of CN117078735A

Classifications

    • G06T7/60 Image analysis; analysis of geometric attributes
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10016 Image acquisition modality: video; image sequence
    • G06T2207/10024 Image acquisition modality: color image
    • G06T2207/30196 Subject of image: human being; person

Abstract

The invention relates to the technical field of computers, and provides a height detection method, a height detection system, electronic equipment and a storage medium, wherein the height detection method comprises the following steps: extracting video frames of the RGB video data and the depth video data to obtain an RGB image and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain a target depth image; acquiring a first detection frame of a user head in a target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame; calculating a first target depth value according to the depth values of all the pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value and the first target depth value; and determining a position area where the user is located when the video data are acquired, and carrying out height correction on the first estimated height value according to the position area to obtain a target height value. According to the invention, adults and children are accurately distinguished, and meanwhile, the height correction is carried out according to the position area, so that the potential safety hazard is reduced.

Description

Height detection method, system, electronic device and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a height detection method, a height detection system, electronic equipment and a storage medium.
Background
At present, subway gate passage identification and control mainly rely on infrared correlation sensors to detect and track the passage behavior of passengers in the channel. This identification technology cannot distinguish adults from children, so that after an adult passes, a following child may be unable to pass normally, and when an adult carries a child through the gate, potential safety hazards such as the person being caught by the gate are easily caused.
Disclosure of Invention
The embodiment of the invention provides a height detection method, a height detection system, electronic equipment and a storage medium, aiming at reducing potential safety hazards.
In a first aspect, an embodiment of the present invention provides a height detection method, including:
extracting video frames from the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
acquiring a first detection frame of a user head in the target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
calculating a first target depth value according to the depth values of all pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
And determining a position area where the user is located when the video data are acquired, and correcting the height of the first estimated height value according to the position area to obtain a target height value.
In one embodiment, the performing height correction on the first estimated height value according to the location area to obtain a target height value includes:
if the position area is determined to be in the first area, inputting the first estimated height value into a height calibration model, and obtaining the height value output by the height calibration model; the first area is a passing area in the card swiping gate; the height calibration model is obtained by training according to the estimated height value of the user in the first area and the corresponding actual height value;
and determining the height value output by the height calibration model as the target height value.
In one embodiment, the specific steps of the training process of the height calibration model include:
acquiring user height values of a plurality of users;
establishing a detection coordinate system by taking the right center position of the first area right below the acquisition equipment as a coordinate system zero point;
calculating second estimated height values of a plurality of users at different positions in the first area;
Generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area;
and carrying out regression equation processing by taking feature lists of a plurality of users at different positions in the first area as fitting data to obtain the height calibration model.
In one embodiment, the generating the feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area includes:
acquiring second detection frames of the heads of the users in the depth images after the pixel points are aligned and compensated at different positions in the first area, and acquiring first center point position coordinates of second center points of the second detection frames;
acquiring a second target pixel point set in the vicinity of a second center point at different positions in the first region, and calculating second target depth values of a plurality of users at different positions in the first region according to the depth values of all the pixels in the second target pixel point set at different positions in the first region;
Calculating second estimated height values of a plurality of users at different positions in the first area according to the equipment height value and second target depth values at different positions in the first area;
converting the first central point position coordinate into a second central point position coordinate under the detection coordinate system;
and generating a feature list of the plurality of users at different positions in the first area according to the position coordinates of the second center point, the length and the width of the second detection frame, the height value of the users and the second estimated height value of the plurality of users at different positions in the first area.
In one embodiment, the performing height correction on the first estimated height value according to the location area to obtain a target height value includes:
if the position area is determined to be in the second area, acquiring a height calibration coefficient; the second area is a gate card swiping area;
and calculating according to the first estimated height value and the height calibration coefficient to obtain the target height value.
In one embodiment, the specific step of calculating the height calibration factor includes:
acquiring user height values of a plurality of users;
acquiring a third detection frame of the user head in the depth image after the pixel point alignment compensation of a plurality of users in the second area, and acquiring a third target pixel point set in a third center point adjacent area of the third detection frame;
Calculating a third target depth value of the plurality of users according to the depth value of each pixel point in the third target pixel point set of the plurality of users, and calculating a third estimated height value of the plurality of users according to the equipment height value and the third target depth value of the plurality of users;
calculating the calibration coefficients of the plurality of users according to the user height values of the plurality of users and the third estimated height values of the plurality of users, and carrying out average value calculation on the calibration coefficients of the plurality of users to obtain the height calibration coefficients.
In one embodiment, after the correcting the first estimated height value according to the position area to obtain the target height value, the method further includes:
carrying out identity recognition on the user to obtain identity ID information of the user;
binding the identity ID information of the user and the target height value of the user to obtain the label information of the user, and carrying out target tracking on the user according to the label information of the user.
In a second aspect, an embodiment of the present invention provides a height detection system, including:
the image processing module is used for extracting video frames of the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and carrying out pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
The acquisition module is used for acquiring a first detection frame of the user head in the target depth image and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
the height estimation module is used for calculating a first target depth value according to the depth value of each pixel point in the first target pixel point set and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
the height correction module is used for determining a position area where a user is located when video data are collected, and correcting the height of the first estimated height value according to the position area to obtain a target height value.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor, and the processor implements the height detection method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer readable storage medium, including a computer program, which when executed by a processor implements the height detection method according to the first aspect.
According to the height detection method, the height detection system, the electronic equipment and the storage medium, video frame extraction is carried out on the collected RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and pixel point alignment compensation is carried out on the RGB image and the depth image to be processed to obtain the target depth image; acquiring a first detection frame of a user head in a target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame; calculating a first target depth value according to the depth values of all the pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value; and determining a position area where the user is located when the video data are acquired, and carrying out height correction on the first estimated height value according to the position area to obtain a target height value.
In the process of height detection, an adult and a child are accurately distinguished through the RGB image and the depth image, and height correction is carried out according to the position area where the user is located, so that the height value detected finally is more accurate, the problems that the adult and the child are inaccurate in identification, the child carried by the adult cannot normally pass through or the child is clamped are solved, and potential safety hazards are reduced.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a height detection method provided by an embodiment of the invention;
fig. 2 is a schematic diagram of an overall illumination range of a camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a height detection system according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a height detection method according to an embodiment of the present invention. The embodiment of the invention provides a height detection method, which comprises the following steps:
step 101, extracting video frames from the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
Step 102, acquiring a first detection frame of a user head in the target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
step 103, calculating a first target depth value according to the depth values of all the pixels in the first target pixel set, and calculating a first estimated height value according to the device height value of the acquisition device and the first target depth value.
Step 104, determining a position area where the user is located when the video data are collected, and correcting the first estimated height value according to the position area to obtain a target height value.
It should be noted that, the height detection method provided by the embodiment of the invention uses the height detection system as an execution subject for illustration. The height detection system is a system based on image processing and computer vision technology, aims to accurately measure the height of a user from images or videos, and is mainly applied to scenes for calculating the height of the user through a depth camera, wherein the application scenes comprise, but are not limited to, subway passing scenes, station passing scenes and airport passing scenes.
It should be further noted that conventional depth information collecting devices are mainly three-dimensional cameras which, according to their imaging principles, include but are not limited to structured light cameras, Time-of-Flight (TOF) cameras and binocular stereo cameras. The depth information obtained by a structured light camera is easily interfered with by ambient light, the depth precision of a TOF camera at object edges is poor, the field of view of a binocular stereo camera is relatively limited, and none of the three can maintain stable precision in all application scenes. Therefore, the embodiment of the invention adopts an RGBD (Red Green Blue Depth) camera: the RGBD camera of a single gate channel acquires the RGB (Red Green Blue) video data and depth video data of users passing through the gate channel. The RGBD camera combines color image acquisition over the three red, green and blue channels with data from an infrared or other depth sensor, and can simultaneously acquire the surface color of objects and depth information, namely their distance from the camera.
Specifically, the height detection system collects RGB video data and depth video data of a gate channel through an RGBD camera, extracts video frames of the collected RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performs pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image, wherein the pixel point alignment compensation refers to adjustment performed in an image capturing or display device for solving the problem of pixel alignment difference between different image sensors or displays, and the pixel point alignment compensation can ensure consistency between images.
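The patent does not detail how the pixel point alignment compensation is carried out; a common way to achieve it, assumed here purely for illustration, is to back-project each depth pixel to a 3D point and re-project it into the RGB camera's image plane using both cameras' intrinsics and the extrinsic transform between them. The sketch below (Python/NumPy, with illustrative function and parameter names) shows that assumption:

```python
import numpy as np

def align_depth_to_rgb(depth, K_d, K_c, R, t):
    """Reproject each depth pixel into the RGB camera frame (illustrative sketch).

    depth : (H, W) depth map in millimetres from the depth sensor
    K_d   : (3, 3) depth-camera intrinsic matrix
    K_c   : (3, 3) RGB-camera intrinsic matrix (same resolution assumed)
    R, t  : rotation (3, 3) and translation (3,) from the depth frame to the RGB frame
    """
    h, w = depth.shape
    aligned = np.zeros((h, w), dtype=np.float64)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    valid = z > 0
    # Back-project valid depth pixels to 3D points in the depth-camera frame
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1)[valid]             # (N, 3)
    # Transform into the RGB-camera frame and project with the RGB intrinsics
    pts_c = pts @ R.T + t
    u_c = np.round(pts_c[:, 0] * K_c[0, 0] / pts_c[:, 2] + K_c[0, 2]).astype(int)
    v_c = np.round(pts_c[:, 1] * K_c[1, 1] / pts_c[:, 2] + K_c[1, 2]).astype(int)
    keep = (u_c >= 0) & (u_c < w) & (v_c >= 0) & (v_c < h)
    aligned[v_c[keep], u_c[keep]] = pts_c[keep, 2]        # depth now aligned to RGB pixels
    return aligned
```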
Further, the height detection system acquires a first detection frame of the user head in the target depth image, and acquires a first target pixel point set in the neighborhood of the first center point of the first detection frame, where the acquired first detection frame of the user head includes, but is not limited to, a human head position detection frame and a cap position detection frame; the depth value of each pixel point in the first target pixel point set is greater than or equal to a first depth threshold and less than or equal to a second depth threshold, and the first depth threshold and the second depth threshold are set comprehensively according to the application scene and the mounting height of the RGBD camera.
Further, the height detection system calculates the first target depth value as the mean of the depth values of all the pixel points in the first target pixel point set, acquires the device height value of the acquisition device, namely the device height value of the RGBD camera, and calculates the first estimated height value according to the device height value of the acquisition device and the first target depth value.
In an embodiment, the first target pixel point set in which the depth value of each pixel point is greater than or equal to the first depth threshold and less than or equal to the second depth threshold is C_K, the mean depth value of the first target pixel point set C_K obtained through calculation is S_K, and the device height value of the acquisition device is H; therefore, the first estimated height value h = H - S_K.
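A minimal sketch of this step, under the assumptions that the depth image stores millimetre values, that the vicinity of the first center point is a small square window around the detection-frame center, and that out-of-range depths are simply discarded (the window radius and threshold values below are placeholders, not figures from the patent):

```python
import numpy as np

def first_estimated_height(depth_img, box, device_height_mm, t1, t2, radius=5):
    """Compute h = H - S_K from the head detection box in the target depth image.

    box = (x, y, w, l): top-left corner, width and length of the first detection frame.
    t1, t2: first and second depth thresholds, set from the scene and camera mount height.
    radius: assumed half-size of the neighborhood window around the box center.
    """
    x, y, w, l = box
    cx, cy = x + w // 2, y + l // 2
    window = depth_img[max(cy - radius, 0):cy + radius + 1,
                       max(cx - radius, 0):cx + radius + 1].astype(np.float64)
    # First target pixel point set C_K: neighborhood pixels with depth in [t1, t2]
    c_k = window[(window >= t1) & (window <= t2)]
    if c_k.size == 0:
        return None                      # no usable depth readings near the head center
    s_k = c_k.mean()                     # first target depth value S_K (mean of C_K)
    return device_height_mm - s_k        # first estimated height value h = H - S_K
```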
Further, the height detection system determines a position area where the user is located when the video data is collected, and performs height correction on the first estimated height value according to the position area to obtain a target height value for judging whether the user is an adult or a child.
According to the height detection method provided by the embodiment of the invention, video frame extraction is carried out on the collected RGB video data and depth video data to obtain an RGB image and a depth image to be processed of a target frame, and pixel point alignment compensation is carried out on the RGB image and the depth image to be processed to obtain the target depth image; acquiring a first detection frame of a user head in a target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame; calculating a first target depth value according to the depth values of all the pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value; and determining a position area where the user is located when the video data are acquired, and carrying out height correction on the first estimated height value according to the position area to obtain a target height value. In the process of height detection, an adult and a child are accurately distinguished through the RGB image and the depth image, and height correction is carried out according to the position area where the user is located, so that the height value detected finally is more accurate, the problems that the adult and the child are inaccurate in identification, the child carried by the adult cannot normally pass through or the child is clamped are solved, and potential safety hazards are reduced.
Further, step 104 of performing height correction on the first estimated height value according to the position area to obtain a target height value includes:
if the position area is determined to be in the first area, inputting the first estimated height value into a height calibration model, and obtaining the height value output by the height calibration model; the first area is a passing area in the card swiping gate; the height calibration model is obtained by training according to the estimated height value of the user in the first area and the corresponding actual height value;
and determining the height value output by the height calibration model as the target height value.
It should be noted that the overall illumination range of the RGBD camera includes the gate entry area, the gate exit area and the gate channel. Referring to fig. 2, fig. 2 is a schematic diagram of the overall illumination range of the camera according to the embodiment of the present invention. In a subway gate application scenario, the card swiping area for entering is gate entry area A, the card swiping area for exiting is gate exit area C, and the gate channel through which the user passes is gate passing area B. The depth information of gate entry area A and gate exit area C has a larger error, but the ranges of these two areas are smaller and the variation range of the user's height there is smaller, while the range of gate passing area B is larger; therefore, for the subway gate passing application scenario, the accuracy deficiency of the camera itself needs to be compensated by correction and compensation means.
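The patent does not state how the position area is resolved; as a toy illustration, assuming the user's position along the gate channel is available in the detection coordinate system (zero directly below the camera), the area could be chosen by simple thresholds. The boundary values below are placeholders, not dimensions from the patent:

```python
def locate_region(y_mm, entry_boundary_mm=-600.0, exit_boundary_mm=600.0):
    """Map a position along the gate channel to region A, B or C (illustrative only)."""
    if y_mm < entry_boundary_mm:
        return "A"   # gate entry (card swiping) area
    if y_mm > exit_boundary_mm:
        return "C"   # gate exit (card swiping) area
    return "B"       # gate passing area, i.e. the first area
```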
Specifically, if it is determined that the location area where the user is located when the video data is collected is the first area, the height detection system inputs the first estimated height value into the trained height calibration model, and obtains the target height value through the trained height calibration model, that is, the height detection system inputs the first estimated height value into the height calibration model, obtains the height value output by the height calibration model, and determines the height value output by the height calibration model as the target height value. The first area is a passing area in the card gate, that is, a gate passing area, as shown in fig. 2 as an area B, and the height calibration model is obtained by training according to the estimated height value of the user in the first area and the actual height value of the user.
According to the embodiment of the invention, the first estimated height value is input into the trained height calibration model in the first area, the height value output by the height calibration model is obtained and is determined as the target height value, the height of the user is accurately and stably predicted through the height calibration model, so that adults and children can be accurately distinguished, and meanwhile, the height correction is carried out according to the position area, so that the potential safety hazard is reduced.
Further, the specific steps of the training process of the height calibration model comprise:
acquiring user height values of a plurality of users;
establishing a detection coordinate system by taking the right center position of the first area right below the acquisition equipment as a coordinate system zero point;
calculating second estimated height values of a plurality of users at different positions in the first area;
generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area;
and carrying out regression equation processing by taking feature lists of a plurality of users at different positions in the first area as fitting data to obtain the height calibration model.
Specifically, the height detection system obtains the user height values of a plurality of users, where the user height values of the plurality of users can be obtained in advance through statistical recording. Further, the height detection system takes the exact center position of the first area directly below the acquisition device, that is, directly below the RGBD camera, as the zero point of the coordinate system, and establishes the detection coordinate system based on this zero point.
Further, the height detection system obtains the second target pixel point sets in the neighborhoods of the second center points at different positions of the plurality of users in the first area, and calculates the second estimated height values of the plurality of users at different positions in the first area according to the second target depth values at those positions.
Further, the height detection system generates a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area, the feature list of the plurality of users at different positions in the first area is determined to be fitting data, regression equation processing is carried out on the fitting data through a regression equation to obtain a height calibration model, and the target height value calibrated through the height calibration model is obtained, wherein the regression equation comprises but is not limited to a multiple linear regression equation.
According to the embodiment of the invention, the characteristic list is generated by detecting the coordinate system, the user height values of a plurality of users and the second estimated height values of the users at different positions in the first area, regression equation processing is carried out by taking the characteristic list as fitting data, the height calibration model is obtained, the height calibration model has stronger adaptability and accuracy by integrating various characteristic data for regression, the user height is accurately and stably predicted by the height calibration model, adults and children are accurately distinguished, and meanwhile, the height correction is carried out according to the position area, so that the potential safety hazard is reduced.
Further, the generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area includes:
acquiring second detection frames of the heads of the users in the depth images after the pixel points are aligned and compensated at different positions in the first area, and acquiring first center point position coordinates of second center points of the second detection frames;
acquiring a second target pixel point set in the vicinity of a second center point at different positions in the first region, and calculating second target depth values of a plurality of users at different positions in the first region according to the depth values of all the pixels in the second target pixel point set at different positions in the first region;
calculating second estimated height values of a plurality of users at different positions in the first area according to the equipment height value and second target depth values at different positions in the first area;
converting the first central point position coordinate into a second central point position coordinate under the detection coordinate system;
and generating a feature list of the plurality of users at different positions in the first area according to the position coordinates of the second center point, the length and the width of the second detection frame, the height value of the users and the second estimated height value of the plurality of users at different positions in the first area.
Specifically, the height detection system collects RGB video data and depth video data of different positions of a plurality of users in a first area through an RGBD camera, extracts video frames of the collected RGB video data and depth video data of different positions of the plurality of users in the first area, and performs pixel point alignment compensation on an extracted image to obtain depth images of the plurality of users in different positions in the first area after the pixel point alignment compensation.
Further, the height detection system obtains the second detection frames of the user heads in the depth images after pixel point alignment compensation at different positions of the plurality of users in the first area, and obtains the first center point position coordinates of the second center points of the second detection frames. Further, the height detection system obtains the second target pixel point sets in the neighborhoods of the second center points at the different positions in the first area, where the depth value of each pixel point in the second target pixel point set is greater than or equal to the first depth threshold and less than or equal to the second depth threshold.
Further, the height detection system calculates the second target depth values of the plurality of users at different positions in the first area as the mean of the depth values of all the pixel points in the corresponding second target pixel point sets, acquires the device height value of the acquisition device, and calculates the second estimated height values of the plurality of users at different positions in the first area according to the device height value of the acquisition device and the second target depth values at those positions.
Further, the height detection system converts the first center point position coordinates into second center point position coordinates in the detection coordinate system based on the constructed detection coordinate system.
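One plausible reading of this conversion, assuming a pinhole depth camera mounted looking straight down with the detection coordinate system's zero point on the floor directly beneath it, is a simple back-projection of the pixel coordinate with the measured depth; the function name and parameters below are illustrative assumptions, not details given in the patent:

```python
def pixel_to_detection_coords(cx_px, cy_px, depth_mm, fx, fy, ppx, ppy):
    """Convert the first center point pixel coordinate (C_x, C_y) into (C'_x, C'_y).

    fx, fy, ppx, ppy are the depth camera's focal lengths and principal point;
    the result is expressed in millimetres in the detection coordinate system.
    """
    x_mm = (cx_px - ppx) * depth_mm / fx
    y_mm = (cy_px - ppy) * depth_mm / fy
    return x_mm, y_mm
```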
Further, the height detection system combines, for the plurality of users at different positions in the first area, the second center point position coordinates, the length and the width of the second detection frame, the user height value and the second estimated height value, and generates the feature list of the plurality of users at different positions in the first area.
In one embodiment, when a user is at a position in the first area, the position coordinates of the second detection frame of the user's head are (x, y, w, l), where x and y denote the horizontal and vertical coordinates of the top-left vertex of the second detection frame relative to the top-left vertex of the depth image frame, and w and l denote the width and length of the second detection frame; the first center point position coordinate of the second center point of the second detection frame is (C_x, C_y), and the first center point position coordinate (C_x, C_y) is converted into the second center point position coordinate (C'_x, C'_y) in the detection coordinate system; the second target pixel point set in which the depth value of each pixel point is greater than or equal to the first depth threshold and less than or equal to the second depth threshold is C'_K, the mean depth value of the second target pixel point set C'_K obtained through calculation is S'_K, and the device height value of the acquisition device is H, so the second estimated height value h' = H - S'_K; the user height value is h*. Thus, the feature list of a user at a position within the first area is [C'_x, C'_y, w, l, h']. Denoting the height calibration model as f, the model h* = f([C'_x, C'_y, w, l, h']) is established and solved for f, where h* and h' are used to train the height calibration model, so as to construct the trained height calibration model.
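Assuming the regression equation is an ordinary multiple linear regression (the patent names this only as one option), the height calibration model f could be fitted from the feature lists as sketched below; scikit-learn is an implementation choice, not something the patent prescribes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_height_calibration_model(feature_lists, true_heights):
    """Fit h* = f([C'_x, C'_y, w, l, h']) by multiple linear regression.

    feature_lists : (N, 5) rows of [C'_x, C'_y, w, l, h'], one per user and position
    true_heights  : (N,) measured user height values h*
    """
    X = np.asarray(feature_lists, dtype=np.float64)
    y = np.asarray(true_heights, dtype=np.float64)
    return LinearRegression().fit(X, y)

# Usage in the first area (region B): calibrate a new first estimated height value, e.g.
# target_height = model.predict([[cx_p, cy_p, w, l, h_est]])[0]
```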
According to the embodiment of the invention, the position coordinates of the second center points of the plurality of users at different positions in the first area, the length and the width of the second detection frame, the height value of the user and the second estimated height value are obtained, the characteristic list of the plurality of users at different positions in the first area is generated, regression is carried out through comprehensive and various characteristic data, so that the height calibration model has stronger adaptability and accuracy, meanwhile, the problem of abnormal depth information caused by an imaging principle of the existing depth camera is solved through correction and compensation measures, and the problem of deviation of the obtained depth information caused by the view angle of the existing depth camera is solved.
Further, step 104 of performing height correction on the first estimated height value according to the position area to obtain a target height value includes:
if the position area is determined to be in the second area, acquiring a height calibration coefficient; the second area is a gate card swiping area;
and calculating according to the first estimated height value and the height calibration coefficient to obtain the target height value.
Specifically, if it is determined that the position area where the user is located when the video data is collected is the second area, the height detection system acquires the height calibration coefficient, where the second area is the gate card swiping area, namely the gate entry area and the gate exit area, such as area A or area C shown in fig. 2. Further, the height detection system calculates the target height value from the first estimated height value and the height calibration coefficient. In one embodiment, the height calibration coefficient is δ and the first estimated height value is h, so the target height value is h_c = h + δ.
According to the embodiment of the invention, the first estimated height value and the height calibration coefficient are calculated by obtaining the height calibration coefficient in the second area to obtain the target height value, and the height of the user is accurately and stably predicted by the height calibration coefficient so as to accurately distinguish adults from children, and meanwhile, the height correction is carried out according to the position area, so that the potential safety hazard is reduced.
Further, the specific step of calculating the height calibration coefficient comprises the following steps:
acquiring user height values of a plurality of users;
acquiring a third detection frame of the user head in the depth image after the pixel point alignment compensation of a plurality of users in the second area, and acquiring a third target pixel point set in a third center point adjacent area of the third detection frame;
calculating a third target depth value of the plurality of users according to the depth value of each pixel point in the third target pixel point set of the plurality of users, and calculating a third estimated height value of the plurality of users according to the equipment height value and the third target depth value of the plurality of users;
calculating the calibration coefficients of the plurality of users according to the user height values of the plurality of users and the third estimated height values of the plurality of users, and carrying out average value calculation on the calibration coefficients of the plurality of users to obtain the height calibration coefficients.
Specifically, the height detection system obtains user height values for a plurality of users. Further, the height detection system collects RGB video data and depth video data of a plurality of users in the second area through the RGBD camera, extracts video frames of the collected RGB video data and depth video data of the plurality of users in different positions in the second area, and performs pixel point alignment compensation on the extracted images to obtain depth images of the plurality of users in the second area after the pixel point alignment compensation.
Further, the height detection system obtains the third detection frames of the user heads in the pixel-point-alignment-compensated depth images of the plurality of users in the second area, and obtains the third target pixel point sets in the neighborhoods of the third center points of the third detection frames, where the depth value of each pixel point in the third target pixel point set is greater than or equal to the first depth threshold and less than or equal to the second depth threshold.
Further, the height detection system calculates the third target depth values of the plurality of users in the second area as the mean of the depth values of all the pixel points in the corresponding third target pixel point sets, acquires the device height value of the acquisition device, and calculates the third estimated height values of the plurality of users in the second area according to the device height value of the acquisition device and the third target depth values of the plurality of users in the second area.
Further, the height detection system calculates the calibration coefficients of the plurality of users according to the user height values of the plurality of users and the third estimated height values of the plurality of users in the second area, and performs mean value calculation on the calibration coefficients of the plurality of users to obtain the height calibration coefficient. In one embodiment, among N users, the user height value of the i-th user is h*_i and the third estimated height value of the i-th user in the second area is h_i, so the calibration coefficient of that user is δ_i = h*_i - h_i; therefore, the height calibration coefficient δ is the mean of δ_1, ..., δ_N, that is, δ = (1/N) Σ δ_i.
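A short sketch of this calculation, assuming each user's calibration coefficient is the gap between the measured and the third estimated height value (an assumption consistent with the additive correction h_c = h + δ used for the second area above):

```python
import numpy as np

def height_calibration_coefficient(true_heights, estimated_heights):
    """Mean calibration coefficient δ over N users in the second area."""
    deltas = np.asarray(true_heights, dtype=float) - np.asarray(estimated_heights, dtype=float)
    return deltas.mean()

# Usage in the second area (regions A/C): target_height = first_estimated_height + delta
```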
According to the embodiment of the invention, the calibration coefficients of a plurality of users are calculated through the user height values of the plurality of users and the calculated third estimated height values, the average value calculation is carried out on the calibration coefficients of the plurality of users to obtain the height calibration coefficients, the prediction error can be effectively corrected through the calculated calibration coefficients, the accuracy of height prediction is improved, meanwhile, the problem of abnormal depth information caused by an imaging principle of the existing depth camera is solved through correction and compensation measures, and the problem of deviation of acquired depth information caused by the view angle of the existing depth camera is solved.
Further, after the height correction is performed on the first estimated height value according to the position area to obtain the target height value, the method further includes:
carrying out identity recognition on the user to obtain identity ID information of the user;
binding the identity ID information of the user and the target height value of the user to obtain the label information of the user, and carrying out target tracking on the user according to the label information of the user.
Specifically, after performing height correction on the first estimated height value according to the position area to obtain the target height value, the height detection system performs identity recognition on the user to obtain the identity (Identity Document, ID) information of the user, binds the identity ID information of the user with the target height value of the user to obtain the label information of the user, and performs target tracking on the user according to the label information of the user to obtain the passage situation of the user in the gate channel, where the passage situation of the user in the gate channel includes, but is not limited to, tailgating, passing in reverse and normal passage. Further, the height detection system controls the gate door action according to the passage situation of the user in the gate channel.
According to the embodiment of the invention, the identity ID information of the user and the target height value of the user are bound to obtain the label information of the user, target tracking is performed on the user according to the label information, and business judgment and analysis such as tailgating, passing in reverse and normal passage are carried out to obtain the passage situation of the user in the gate channel, so that real-time tracking and passage behavior analysis of the user are realized, the safety and management efficiency of the gate channel are improved, and the potential safety hazard is reduced.
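As a purely illustrative data-structure sketch (all names are hypothetical), the binding of identity ID information to the target height value could be kept in a small record that the tracker consults while the user moves through the gate channel:

```python
from dataclasses import dataclass

@dataclass
class UserLabel:
    """Label information: a user's identity ID bound to the target height value."""
    user_id: str
    target_height_mm: float

# Assumed bookkeeping of tracked users inside the gate channel
active_labels: dict[str, UserLabel] = {}

def bind_user(user_id: str, target_height_mm: float) -> UserLabel:
    """Create the label information and register the user for target tracking."""
    label = UserLabel(user_id, target_height_mm)
    active_labels[user_id] = label
    return label
```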
The height detection system provided by the embodiment of the invention is described below, and the height detection system described below and the height detection method described above can be referred to correspondingly. Referring to fig. 3, fig. 3 is a schematic structural diagram of a height detection system according to an embodiment of the present invention, where the height detection system according to the embodiment of the present invention includes:
the image processing module 301 is configured to perform video frame extraction on the collected RGB video data and depth video data to obtain an RGB image and a depth image to be processed of a target frame, and perform pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
the acquiring module 302 is configured to acquire a first detection frame of a user's head in the target depth image, and acquire a first target pixel point set in a first center point vicinity of the first detection frame;
The height estimation module 303 is configured to calculate a first target depth value according to the depth values of the pixels in the first target pixel point set, and calculate a first estimated height value according to the device height value of the collection device and the first target depth value;
the height correction module 304 is configured to determine a location area where the user is located when the video data is collected, and perform height correction on the first estimated height value according to the location area, so as to obtain a target height value.
The height detection system provided by the embodiment of the invention extracts video frames of the acquired RGB video data and depth video data to obtain an RGB image and a depth image to be processed of a target frame, and performs pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image; acquiring a first detection frame of a user head in a target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame; calculating a first target depth value according to the depth values of all the pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value; and determining a position area where the user is located when the video data are acquired, and carrying out height correction on the first estimated height value according to the position area to obtain a target height value. In the process of height detection, an adult and a child are accurately distinguished through the RGB image and the depth image, and height correction is carried out according to the position area where the user is located, so that the height value detected finally is more accurate, the problems that the adult and the child are inaccurate in identification, the child carried by the adult cannot normally pass through or the child is clamped are solved, and potential safety hazards are reduced.
In one embodiment, height correction module 304 is further to:
if the position area is determined to be in the first area, inputting the first estimated height value into a height calibration model, and obtaining the height value output by the height calibration model; the first area is a passing area in the card swiping gate; the height calibration model is obtained by training according to the estimated height value of the user in the first area and the corresponding actual height value;
and determining the height value output by the height calibration model as the target height value.
In one embodiment, height correction module 304 is further to:
acquiring user height values of a plurality of users;
establishing a detection coordinate system by taking the right center position of the first area right below the acquisition equipment as a coordinate system zero point;
calculating second estimated height values of a plurality of users at different positions in the first area;
generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area;
and carrying out regression equation processing by taking feature lists of a plurality of users at different positions in the first area as fitting data to obtain the height calibration model.
In one embodiment, height correction module 304 is further to:
acquiring second detection frames of the heads of the users in the depth images after the pixel points are aligned and compensated at different positions in the first area, and acquiring first center point position coordinates of second center points of the second detection frames;
acquiring a second target pixel point set in the vicinity of a second center point at different positions in the first region, and calculating second target depth values of a plurality of users at different positions in the first region according to the depth values of all the pixels in the second target pixel point set at different positions in the first region;
calculating second estimated height values of a plurality of users at different positions in the first area according to the equipment height value and second target depth values at different positions in the first area;
converting the first central point position coordinate into a second central point position coordinate under the detection coordinate system;
and generating a feature list of the plurality of users at different positions in the first area according to the position coordinates of the second center point, the length and the width of the second detection frame, the height value of the users and the second estimated height value of the plurality of users at different positions in the first area.
In one embodiment, height correction module 304 is further to:
if the position area is determined to be in the second area, acquiring a height calibration coefficient; the second area is a gate card swiping area;
and calculating according to the first estimated height value and the height calibration coefficient to obtain the target height value.
In one embodiment, height correction module 304 is further to:
acquiring user height values of a plurality of users;
acquiring a third detection frame of the user head in the depth image after the pixel point alignment compensation of a plurality of users in the second area, and acquiring a third target pixel point set in a third center point adjacent area of the third detection frame;
calculating a third target depth value of the plurality of users according to the depth value of each pixel point in the third target pixel point set of the plurality of users, and calculating a third estimated height value of the plurality of users according to the equipment height value and the third target depth value of the plurality of users;
calculating the calibration coefficients of the plurality of users according to the user height values of the plurality of users and the third estimated height values of the plurality of users, and carrying out average value calculation on the calibration coefficients of the plurality of users to obtain the height calibration coefficients.
In one embodiment, the height detection system is further configured to:
Carrying out identity recognition on the user to obtain identity ID information of the user;
binding the identity ID information of the user and the target height value of the user to obtain the label information of the user, and carrying out target tracking on the user according to the label information of the user.
The specific embodiment of the height detection system provided by the invention is basically the same as each embodiment of the height detection method, and is not described herein.
Fig. 4 illustrates a physical schematic diagram of an electronic device, as shown in fig. 4, which may include: processor 410, communication interface (Communication Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. The processor 410 may call a computer program in the memory 430 to perform the steps of the height detection method, including, for example:
extracting video frames from the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
acquiring a first detection frame of a user head in the target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
Calculating a first target depth value according to the depth values of all pixels in the first target pixel set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
and determining a position area where the user is located when the video data are acquired, and correcting the height of the first estimated height value according to the position area to obtain a target height value.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, embodiments of the present invention further provide a non-transitory computer readable storage medium, where the non-transitory computer readable storage medium includes a computer program, where the computer program may be stored on the non-transitory computer readable storage medium, and when the computer program is executed by a processor, the computer program may be capable of executing the steps of the height detection method provided in the foregoing embodiments, for example, including:
extracting video frames from the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
acquiring a first detection frame of a user head in the target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
calculating a first target depth value according to the depth values of all pixels in the first target pixel point set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
and determining the position area where the user is located when the video data are acquired, and performing height correction on the first estimated height value according to the position area to obtain a target height value.
The system embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the invention without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or, of course, by means of hardware. Based on this understanding, the essence of the foregoing technical solution, or the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, comprising several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A height detection method, comprising:
extracting video frames from the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and performing pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
acquiring a first detection frame of a user head in the target depth image, and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
calculating a first target depth value according to the depth values of all pixels in the first target pixel point set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
and determining the position area where the user is located when the video data are acquired, and performing height correction on the first estimated height value according to the position area to obtain a target height value.
2. The height detection method according to claim 1, wherein the performing height correction on the first estimated height value according to the position area to obtain a target height value comprises:
if the position area is determined to be in the first area, inputting the first estimated height value into a height calibration model, and obtaining the height value output by the height calibration model; the first area is a passing area in the card swiping gate; the height calibration model is obtained by training according to the estimated height value of the user in the first area and the corresponding actual height value;
and determining the height value output by the height calibration model as the target height value.
3. The height detection method according to claim 2, wherein the specific steps of the training process of the height calibration model include:
acquiring user height values of a plurality of users;
establishing a detection coordinate system by taking the center position of the first area directly below the acquisition equipment as the origin of the coordinate system;
calculating second estimated height values of the plurality of users at different positions in the first area;
generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users and the second estimated height values of the plurality of users at different positions in the first area;
and carrying out regression equation processing by taking feature lists of a plurality of users at different positions in the first area as fitting data to obtain the height calibration model.
4. The height detection method according to claim 3, wherein the generating a feature list of the plurality of users at different positions in the first area according to the detection coordinate system, the user height values of the plurality of users, and the second estimated height values of the plurality of users at different positions in the first area comprises:
acquiring second detection frames of the heads of the plurality of users at different positions in the first area in the depth images after pixel point alignment compensation, and acquiring first center point position coordinates of second center points of the second detection frames;
acquiring a second target pixel point set in the vicinity of a second center point at different positions in the first area, and calculating second target depth values of the plurality of users at different positions in the first area according to the depth values of all the pixels in the second target pixel point set at different positions in the first area;
calculating second estimated height values of the plurality of users at different positions in the first area according to the equipment height value and the second target depth values at different positions in the first area;
converting the first central point position coordinate into a second central point position coordinate under the detection coordinate system;
and generating a feature list of the plurality of users at different positions in the first area according to the second center point position coordinates, the length and width of the second detection frame, the user height values of the plurality of users, and the second estimated height values of the plurality of users at different positions in the first area.
5. The height detection method according to claim 1, wherein the performing height correction on the first estimated height value according to the position area to obtain a target height value comprises:
if the position area is determined to be in the second area, acquiring a height calibration coefficient; the second area is a gate card swiping area;
and calculating according to the first estimated height value and the height calibration coefficient to obtain the target height value.
6. The height detection method according to claim 5, wherein the specific step of calculating the height calibration coefficient comprises:
acquiring user height values of a plurality of users;
acquiring a third detection frame of the user head in the depth image after pixel point alignment compensation for each of the plurality of users in the second area, and acquiring a third target pixel point set in the vicinity of a third center point of the third detection frame;
calculating a third target depth value of the plurality of users according to the depth value of each pixel point in the third target pixel point set of the plurality of users, and calculating a third estimated height value of the plurality of users according to the equipment height value and the third target depth value of the plurality of users;
calculating the calibration coefficients of the plurality of users according to the user height values of the plurality of users and the third estimated height values of the plurality of users, and carrying out average value calculation on the calibration coefficients of the plurality of users to obtain the height calibration coefficient.
7. The height detection method according to any one of claims 1-6, further comprising, after performing height correction on the first estimated height value according to the position area to obtain a target height value:
carrying out identity recognition on the user to obtain identity ID information of the user;
binding the identity ID information of the user and the target height value of the user to obtain the label information of the user, and carrying out target tracking on the user according to the label information of the user.
8. A height detection system, comprising:
the image processing module is used for extracting video frames of the acquired RGB video data and depth video data to obtain an RGB image of a target frame and a depth image to be processed, and carrying out pixel point alignment compensation on the RGB image and the depth image to be processed to obtain the target depth image;
the acquisition module is used for acquiring a first detection frame of the user head in the target depth image and acquiring a first target pixel point set in the vicinity of a first center point of the first detection frame;
the height estimation module is used for calculating a first target depth value according to the depth value of each pixel point in the first target pixel point set, and calculating a first estimated height value according to the equipment height value of the acquisition equipment and the first target depth value;
the height correction module is used for determining the position area where the user is located when the video data are acquired, and performing height correction on the first estimated height value according to the position area to obtain a target height value.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the height detection method according to any one of claims 1 to 7 when executing the computer program.
10. A non-transitory computer readable storage medium comprising a computer program, characterized in that the computer program, when executed by a processor, implements the height detection method of any one of claims 1 to 7.
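For illustration only, a hedged sketch combining the feature-list construction of claim 4 with the training step of claim 3: each feature row holds the second center point coordinates in the detection coordinate system, the length and width of the second detection frame, and the second estimated height value, labelled with the true user height value. Ordinary linear regression is used here as one plausible reading of "regression equation processing"; the feature layout, units, and all numbers are assumptions, not values from the patent:

```python
# Hypothetical training sketch for the height calibration model (claims 2-4).
import numpy as np
from sklearn.linear_model import LinearRegression

def feature_row(center_xy_cm, box_wh_px, estimated_height_cm):
    """One entry of the feature list: second center point coordinates in the
    detection coordinate system, detection-frame length/width, estimated height."""
    x_cm, y_cm = center_xy_cm
    w_px, h_px = box_wh_px
    return [x_cm, y_cm, w_px, h_px, estimated_height_cm]

# Fitting data: several users at several positions in the first area
# (all numbers are made up for illustration).
X = np.array([
    feature_row((  5.0,  -2.0), (42, 40), 168.4),
    feature_row((-30.0,  18.0), (45, 44), 156.1),
    feature_row(( 22.0,  12.0), (40, 39), 179.8),
    feature_row((  0.0,   0.0), (41, 40), 171.0),
    feature_row(( 15.0, -20.0), (44, 42), 174.9),
    feature_row((-12.0,   8.0), (39, 38), 149.6),
])
y = np.array([172.0, 160.0, 183.0, 173.5, 178.0, 152.5])  # true user height values

height_calibration_model = LinearRegression().fit(X, y)  # "regression equation processing"

# Inference in the first area: the model output is taken as the target height value.
new_row = np.array([feature_row((3.0, 1.0), (43, 41), 167.2)])
target_height = float(height_calibration_model.predict(new_row)[0])
```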
CN202311024297.0A 2023-08-14 2023-08-14 Height detection method, system, electronic device and storage medium Pending CN117078735A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311024297.0A CN117078735A (en) 2023-08-14 2023-08-14 Height detection method, system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311024297.0A CN117078735A (en) 2023-08-14 2023-08-14 Height detection method, system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN117078735A true CN117078735A (en) 2023-11-17

Family

ID=88714536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311024297.0A Pending CN117078735A (en) 2023-08-14 2023-08-14 Height detection method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN117078735A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102657532A (en) * 2012-05-04 2012-09-12 深圳泰山在线科技有限公司 Height measuring method and device based on body posture identification
CN108053370A (en) * 2017-11-29 2018-05-18 合肥工业大学 A kind of imager coordinate bearing calibration inhibited based on matching error
CN111079589A (en) * 2019-12-04 2020-04-28 常州工业职业技术学院 Automatic height detection method based on depth camera shooting and height threshold value pixel calibration
CN114820758A (en) * 2021-01-12 2022-07-29 富泰华工业(深圳)有限公司 Plant growth height measuring method, device, electronic device and medium
CN113838117A (en) * 2021-08-06 2021-12-24 公安部物证鉴定中心 Height estimation method based on plantar pressure
CN113627343A (en) * 2021-08-11 2021-11-09 深圳市捷顺科技实业股份有限公司 Pedestrian height detection method, device and equipment and readable storage medium
CN113749646A (en) * 2021-09-03 2021-12-07 中科视语(北京)科技有限公司 Monocular vision-based human body height measuring method and device and electronic equipment
WO2023103377A1 (en) * 2021-12-09 2023-06-15 上海商汤智能科技有限公司 Calibration method and apparatus, electronic device, storage medium, and computer program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Feng et al.: "Transmission Line Loss Analysis and Prediction", 30 September 2021, pages 170-171 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689709A (en) * 2024-02-04 2024-03-12 艾弗世(苏州)专用设备股份有限公司 Height detection method, system, equipment and medium based on depth image
CN117689709B (en) * 2024-02-04 2024-04-05 艾弗世(苏州)专用设备股份有限公司 Height detection method, system, equipment and medium based on depth image

Similar Documents

Publication Publication Date Title
US8385595B2 (en) Motion detection method, apparatus and system
US9183432B2 (en) People counting device and people trajectory analysis device
JP6458734B2 (en) Passenger number measuring device, passenger number measuring method, and passenger number measuring program
JP6494253B2 (en) Object detection apparatus, object detection method, image recognition apparatus, and computer program
US7664315B2 (en) Integrated image processor
EP2202671A2 (en) Subject tracking apparatus and control method therefor, image capturing apparatus, and display apparatus
US8428313B2 (en) Object image correction apparatus and method for object identification
US10872268B2 (en) Information processing device, information processing program, and information processing method
KR20140045854A (en) Method and apparatus for monitoring video for estimating gradient of single object
CN107977645B (en) Method and device for generating video news poster graph
US7692697B2 (en) Pupil color correction device and program
CN117078735A (en) Height detection method, system, electronic device and storage medium
JP5271227B2 (en) Crowd monitoring device, method and program
EP3745348B1 (en) Image processing for removing fog or haze in images
US8213720B2 (en) System and method for determining chin position in a digital image
CN109344758B (en) Face recognition method based on improved local binary pattern
JP2007312206A (en) Imaging apparatus and image reproducing apparatus
KR101146417B1 (en) Apparatus and method for tracking salient human face in robot surveillance
KR101285127B1 (en) Apparatus for monitoring loading material of vehicle
CN116152855A (en) Method and system for identifying passing gate of child pulling based on multi-mode lens
CN113642546B (en) Multi-face tracking method and system
CN112686173B (en) Passenger flow counting method and device, electronic equipment and storage medium
KR101051390B1 (en) Apparatus and method for estimating object information of surveillance camera
US9761275B2 (en) System and method for spatiotemporal image fusion and integration
JP2013210778A (en) Imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination