CN114550077A - People flow statistical method and system based on panoramic image - Google Patents

People flow statistical method and system based on panoramic image

Info

Publication number
CN114550077A
Authority
CN
China
Prior art keywords
image
camera
panoramic
transformation
calculating
Prior art date
Legal status
Pending
Application number
CN202210022221.3A
Other languages
Chinese (zh)
Inventor
冯国徽
秦川
钱振兴
张新鹏
李晓龙
Current Assignee
Southeast Digital Economic Development Research Institute
Original Assignee
Southeast Digital Economic Development Research Institute
Priority date
Filing date
Publication date
Application filed by Southeast Digital Economic Development Research Institute filed Critical Southeast Digital Economic Development Research Institute
Priority to CN202210022221.3A
Publication of CN114550077A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a people flow statistical method based on panoramic images. The method comprises the following steps: 1) acquiring camera images; 2) extracting and matching image features; 3) calculating and correcting camera parameters using the matched features; 4) performing panoramic stitching and fusion on the output frames of the cameras that form the panorama, using the corrected camera parameters, so that a panorama stitcher is obtained; 5) training a head detection classifier with an AdaBoost algorithm using Haar-like features; 6) building a video display platform with Qt; 7) deploying the panorama and head detection modules, so that end-to-end people flow statistics can be obtained simply by inputting the camera parameters. The invention promotes intelligent management of tourist attractions, provides itinerary reference information for travelers, and can also be extended to queue management for medical examinations.

Description

People flow statistical method and system based on panoramic image
Technical Field
The invention relates to the technical field of people flow statistics, in particular to a people flow statistics method and system based on panoramic images.
Background
Smart tourism is a concrete expression of the rapid development of the tourism industry. People flow statistical systems are deployed in scenic areas to address the increased risk of stampede accidents caused by excessive crowding, and they are also required by current epidemic prevention regulations.
At present, a common passenger flow statistical system in a tourist attraction achieves its management purpose by controlling the total passenger volume. The conventional approach installs a ticket monitoring system at the entrances and exits, counts the total number of visitors, and displays the figure on an electronic screen. Although traditional electronic screens have been upgraded to digital-twin platforms, the visualized large screen still shows only the global count, so congestion at individual scenic spots is managed through the visitors' own awareness and the managers' judgment: scenic area managers count the crowd at each location from the pictures fed back by the fixed cameras and then make management decisions after a comprehensive assessment. However, overlapping camera coverage easily leads to double counting and increases the cost of manual statistics, and crowd levels in unmonitored areas are hard to obtain automatically and still require manual management, which challenges the timeliness of responding to accident risks. For visitors, itineraries are usually planned from other people's travel experience; without crowd information for each scenic spot it is hard to adjust the route, the enjoyment obtained is uncertain, and a gap arises between the actual experience and the experience advertised or reported by others who have visited the scenic area.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiment of the invention provides a people flow statistical method and a people flow statistical system based on a panoramic image. The technical scheme is as follows:
In one aspect, a people flow statistical method based on panoramic images is provided, which comprises:
acquiring camera images and performing panoramic stitching and fusion on them;
detecting the number of human heads in the panoramic image;
and acquiring the real-time people flow according to the detected heads.
Further, the step of acquiring camera images and performing panoramic stitching and fusion on them specifically comprises:
step 1) acquiring camera images, including deploying the cameras along a uniform horizontal line, reading the camera streams and setting the image resolution;
step 2) extracting features from each processed image with the SIFT algorithm, then finding the best matching points with a nearest-neighbor method;
step 3) calculating and correcting camera parameters using the matched features;
step 4) performing panoramic stitching and fusion on the output frames of the cameras that form the panorama, using the corrected camera parameters.
Further, the step 3) is specifically as follows:
calculating the camera's optimal estimated parameters from the affine transformation and the best matching points; correcting the estimated camera parameters with bundle adjustment to reduce the feature distortion that these parameters would otherwise introduce into the panorama; then correcting and format-converting the camera parameters; obtaining the order in which images are stitched into the panorama according to an optimal-adjacency matching criterion; extracting, in that order, the image corner features by affine transformation using the previously computed camera parameters and the input images; obtaining the mask correction parameters by affine transformation using the input image masks and the same camera parameters; performing exposure compensation on the input image transformation parameters, the masks and the computed image corners; and then computing the seams between the matched features with a graph-cut method and storing them in the corrected mask transformation parameters.
Further, the step 4) is specifically as follows:
computing the current image transformation from the previously calculated camera parameters, the image corner points and the frame taken from the real-time output stream; computing the mask transformation of the current image from the camera parameters; performing exposure compensation using the corner points, the real-time input image and the newly acquired mask transformation; then dilating and resizing the mask transformation computed in step 3) and combining the result with the new mask transformation of the current step to obtain a seam-optimized mask; and finally fusing the current real-time image transformation, the final optimized mask and the image corner points with a multi-band fusion algorithm to obtain the stitched panoramic image.
Further, the step of obtaining the stitched panoramic image by fusing the current real-time image transformation, the final optimized mask and the image corner points with the multi-band fusion algorithm specifically comprises:
1) computing a Gaussian pyramid of each input image;
2) computing the corresponding Laplacian pyramid;
3) fusing the Laplacian pyramids level by level, for example with a simple linear blend on the two sides of the stitching seam;
4) expanding the higher pyramid levels in turn until they reach the resolution of the source image;
5) superimposing the images of the cameras forming the panorama in sequence to obtain the final output image.
Further, the step of detecting the number of human heads in the panoramic image is specifically:
implementing a head detector with an AdaBoost cascade classifier using Haar-like features;
the method specifically comprises the following steps:
1) describing the global image information with an integral image, where the value of the integral image at a given coordinate is the sum of all pixels above and to the left of that position; the pixel sum of any image region is computed by analogy, and the global feature values of the image are then obtained from the integral image;
2) constructing a classification detector from the obtained features, and reducing the false detection rate while raising the detection rate to obtain an optimal model, where the localization loss is the squared difference between the feature point position and the actual position.
In another aspect, a people flow statistical system based on panoramic images is provided, comprising:
a panorama stitching module for acquiring camera images and performing panoramic stitching and fusion on them;
a head detection module for detecting the number of human heads in the panoramic image;
and a visualized panorama and statistics module for acquiring the real-time people flow according to the detected heads.
Further, the panorama stitching module comprises:
an image acquisition unit for acquiring images from a plurality of cameras in the target area, where the cameras are deployed on the same plane along a uniform horizontal line and the camera streams are read and converted into images through the vlc library;
a feature extraction unit for extracting features from each processed image with the SIFT algorithm and then finding the best matching points with a nearest-neighbor method;
a parameter correction unit for calculating and correcting camera parameters using the matched features;
and a panorama stitching unit for performing panoramic stitching and fusion on the output frames of the cameras that form the panorama, using the corrected camera parameters.
Further, the head detection module is implemented with an AdaBoost cascade method: features are extracted to train the classification model, and the detection box is obtained directly by regressing the feature positions with a squared-error loss.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the invention provides a people flow statistical method based on panoramic images, which is used for obtaining an end-to-end people flow statistical system, the system can remotely monitor the number of people in a target area, density information is remotely provided through camera voice information according to a statistical result and panoramic visual information, and the area management efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart of panorama stitching in embodiment 1 of the present invention;
FIG. 2 is the integral image diagram used to describe the global image information in embodiment 1 of the present invention;
FIG. 3 is a flow chart of human head detection in embodiment 1 of the present invention;
FIG. 4 is the graphical interface for image display and people-count display in embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
The invention provides a people flow statistical method based on panoramic images, which is shown in figure 1 and comprises the following steps:
s1: capturing camera images, including deploying cameras at a uniform horizontal line and reading the camera stream and setting image resolution.
In this embodiment, the camera brand in S1 is not limited to one, and local video may be used, and different resolutions may be set based on the development environment hardware level image.
Specifically, where vlc third party libraries were used to read the camera real-time stream and convert to image frames in S1, the captured image size was changed to 640 x 384, with other resolutions also being selected based on different development environment hardware levels and camera counts.
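A minimal frame-grabbing sketch of S1 in Python. The patent reads the streams with the vlc library; cv2.VideoCapture is used here only as a stand-in for frame grabbing, and the stream URLs are hypothetical placeholders.

```python
import cv2

# Hypothetical stream addresses; replace with the actual camera or local video sources.
STREAM_URLS = [
    "rtsp://192.168.1.11/stream",
    "rtsp://192.168.1.12/stream",
]
FRAME_SIZE = (640, 384)  # resolution used in this embodiment

def grab_frames(urls, size=FRAME_SIZE):
    """Read one frame from each camera and resize it to the working resolution."""
    frames = []
    for url in urls:
        cap = cv2.VideoCapture(url)
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, size))
        cap.release()
    return frames

if __name__ == "__main__":
    print(f"grabbed {len(grab_frames(STREAM_URLS))} frames")
```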
S2: extract features from each processed image with the SIFT algorithm, then find the best matching points with a nearest-neighbor method.
Specifically, the nearest-neighbor matching used in S2 is based on the original feature point matches.
In this embodiment, S2 extracts features with the SIFT algorithm, i.e. the scale-invariant feature transform, and selects the best matching features with a nearest-neighbor algorithm.
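A short sketch of the S2 feature matching, assuming OpenCV's SIFT implementation and a brute-force nearest-neighbour matcher with Lowe's ratio test as the concrete realisation of the "nearest neighbor method"; the ratio threshold is an assumption.

```python
import cv2

def match_features(img_a, img_b, ratio=0.75):
    """SIFT features plus nearest-neighbour matching with a ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)          # brute-force nearest neighbour
    knn = matcher.knnMatch(des_a, des_b, k=2)     # two nearest neighbours per feature
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```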
S3: calculating and correcting camera parameters by using the matched features;
specifically, camera optimal estimated parameters are calculated according to affine transformation and optimal matching points, the estimated camera parameters are corrected by using a beam adjustment method to reduce feature distortion states in a panorama caused by the parameters, correction and format conversion of the camera parameters are performed, image corner features are extracted through affine transformation by using the previously calculated camera parameters and an input image, mask correction parameters are obtained through affine transformation by using an input image mask and the previously calculated camera parameters, exposure compensation is performed on the input image transformation parameters, the mask and the calculated image corners, gaps between the matched features are calculated by using a graph cutting method and are stored in the corrected mask transformation parameters.
In this embodiment, affine transformation is used in S3 to obtain camera parameters, specifically 1) affine transformation is used to calculate an internal reference matrix, a focal length matrix, and a rotation transformation matrix of the camera from the best matching features obtained in S2; 2) the camera parameters, i.e. the above-mentioned internal reference matrix, focal length matrix and rotation matrix, are corrected using beam-balancing. The basic principle of bundle adjustment is that a plurality of images are shot from different angles and positions on an object to be measured, and then object points, image points and optical centers of all characteristic points on the object are all required to fall on corresponding light rays, namely, the collinear relationship of imaging is met, and the overall deviation of the collinear relationship is minimized through optimization in the adjustment process; 3) transforming the original image by using the calculated camera parameters and affine transformation, extracting the characteristics of the upper left corner points, transforming the mask of the original image by using the same method, and generating a transformed image, so that the direction of the transformed image characteristics is consistent with the direction of the matched image characteristics; 4) compensating the image after the original image transformation, the mask transformation image and the angular point exposure, and reducing the difference of the image characteristics after splicing and synthesis; 5) and smoothing gaps among image mask transformations by using a segmentation method, namely smoothing boundaries of feature stitching positions during panorama stitching, and eliminating feature differences caused by inaccurate feature matching.
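The following sketch illustrates only the pairwise-transform part of S3: estimating a transform from the matched features and warping one image into the other's frame. The patent estimates full camera parameters (intrinsics, focal length, rotation) via affine transformation and refines them with bundle adjustment; RANSAC homography estimation is used here purely as an illustrative stand-in, not as the patented method.

```python
import cv2
import numpy as np

def estimate_pair_transform(kp_a, kp_b, good_matches):
    """Estimate a pairwise geometric transform from the matched features with RANSAC."""
    src = np.float32([kp_a[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers

def warp_to_partner(img_a, H, size):
    """Warp image A into image B's frame, as done before exposure compensation
    and graph-cut seam optimisation in S3."""
    return cv2.warpPerspective(img_a, H, size)  # size = (width, height)
```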
S4: perform panoramic stitching and fusion on the output frames of the cameras that form the panorama, using the corrected camera parameters.
Specifically, the current image transformation is computed from the previously calculated camera parameters, the image corner points and the frame taken from the real-time output stream; the mask transformation of the current image is computed from the camera parameters; exposure compensation is performed using the corner points, the real-time input image and the newly acquired mask transformation; the mask transformation computed in S3 is then dilated and resized, and the result is combined with the new mask transformation of the current step to obtain a seam-optimized mask; finally, the current real-time image transformation, the final optimized mask and the image corner points are fused with a multi-band fusion algorithm to obtain the stitched panoramic image.
Further, the multi-band fusion step in S4 is:
1) compute a Gaussian pyramid of each input image;
2) compute the corresponding Laplacian pyramid;
3) fuse the Laplacian pyramids level by level, for example with a simple linear blend on the two sides of the stitching seam;
4) expand the higher pyramid levels in turn until they reach the resolution of the source image;
5) superimpose the images of the cameras forming the panorama in sequence to obtain the final output image.
In this embodiment, S4 covers the real-time panorama stitching process: the new image transformation and mask transformation are computed from the previously obtained camera parameters and the cameras' real-time streams, and are then combined with the previously computed mask and the multi-band fusion method to generate a new panorama.
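A minimal multi-band blending sketch for two already aligned images, following the Gaussian/Laplacian pyramid steps listed above. It assumes image sizes divisible by 2^levels and a single seam mask in [0, 1]; the real pipeline blends several camera images with the seam-optimized masks produced in S3/S4.

```python
import cv2
import numpy as np

def blend_multiband(img_a, img_b, mask, levels=4):
    """Blend two aligned BGR images with a single-channel mask in [0, 1].
    Images and mask share the same size, divisible by 2**levels."""
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):                       # 1) Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    def laplacian(g):                             # 2) Laplacian pyramid from the Gaussian one
        return [g[i] - cv2.pyrUp(g[i + 1], dstsize=(g[i].shape[1], g[i].shape[0]))
                for i in range(levels)] + [g[-1]]

    la, lb = laplacian(ga), laplacian(gb)
    # 3) fuse each level with a simple linear blend weighted by the mask
    fused = [la[i] * gm[i][..., None] + lb[i] * (1.0 - gm[i][..., None])
             for i in range(levels + 1)]

    out = fused[-1]                               # 4) expand and accumulate coarse to fine
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[i].shape[1], fused[i].shape[0])) + fused[i]
    return np.clip(out, 0, 255).astype(np.uint8)  # 5) final output image
```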
S5: this step is implemented with an AdaBoost cascade method; features are extracted to train a classification model, and the detection box is obtained directly by regressing the feature positions with a squared-error loss.
Specifically, the head detector is implemented with an AdaBoost cascade classifier using Haar-like features. The method specifically comprises the following steps:
1) referring to fig. 2, the global image information is described by an integral image: the value of the integral image at a given coordinate is the sum of all pixels above and to the left of that position; the pixel sum of any image region is computed by analogy, and the global feature values of the image are then obtained from the integral image. The integral image is defined as ii(x, y) = Σ_{x'≤x, y'≤y} i(x', y'), where i(x', y') is the pixel value of the original image at (x', y') and ii(x, y) is the integral image value at (x, y). In fig. 2 the pixel sum of region D is (A+B+C+D) + A - (A+B) - (A+C), that is, sum(D) = ii(4) + ii(1) - ii(2) - ii(3), where ii(1) is the pixel sum of region A, ii(2) that of A+B, ii(3) that of A+C, and ii(4) that of A+B+C+D. Pixel sums of adjacent regions are computed by analogy, and a Haar feature value is obtained by adding and subtracting the integral image values at the corner points of the feature rectangles, i.e. by differencing the pixel sums of adjacent regions (a short code sketch of this computation follows this list);
2) a classification detector is constructed from the obtained features, and the false detection rate is reduced while the detection rate is raised to obtain an optimal model, where the localization loss is the squared difference between the feature point position and the actual position.
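As referenced above, a small numpy sketch of the integral image ii(x, y) and the four-corner region sum used in step 1); the function names are illustrative only.

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of all pixels i(x', y') with x' <= x and y' <= y."""
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def region_sum(ii, top, left, bottom, right):
    """Pixel sum of the rectangle [top..bottom] x [left..right] from four
    integral-image lookups: ii(4) + ii(1) - ii(2) - ii(3)."""
    total = ii[bottom, right]                       # ii(4): A+B+C+D
    if top > 0:
        total -= ii[top - 1, right]                 # ii(2): A+B
    if left > 0:
        total -= ii[bottom, left - 1]               # ii(3): A+C
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]              # ii(1): A
    return total
```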
In this embodiment, the integral-image method for extracting the Haar features in step 1) of S5 is chosen because extracting Haar features directly from feature templates is computationally expensive; the integral image serves as a modular acceleration of the Haar feature extraction process.
Further, referring to fig. 3, the overall human head detection process consists of the following logical modules: 1) acquire images as data samples, covering as far as possible the situations that may occur in practical application; half of the sample images contain head features at multiple angles and serve as positive training samples, the other half contain no head features and serve as negative samples; the image sizes are normalized to a size that is easy to train, and all samples are then annotated with categories and target boxes; 2) extract Haar features; 3) generate weak classifiers by collecting statistics of the Haar feature positions over the training samples to obtain the corresponding feature parameters; 4) select optimized weak classifiers with the AdaBoost algorithm to obtain basic binary classifiers; 5) with the set of weak classifiers as input, select the optimal weak classifiers with the AdaBoost algorithm under the constraints of the training detection rate and the false-positive rate; 6) linearly combine the weak classifiers into a strong classifier, which serves as the final detector.
S6: implement a video display platform; this part is implemented with Qt and provides the panorama display and the display of the total number of people in the panorama.
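A minimal sketch of the S6 display platform. The patent implements it with Qt; PyQt5 is used here for illustration, and the window layout (one image label plus one count label) is an assumption rather than the actual interface shown in fig. 4.

```python
import sys
import cv2
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QVBoxLayout
from PyQt5.QtGui import QImage, QPixmap

class PanoramaWindow(QWidget):
    """Minimal Qt window: one label for the panorama, one for the head count."""
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Panoramic people flow statistics")
        self.image_label = QLabel()
        self.count_label = QLabel("People in panorama: 0")
        layout = QVBoxLayout(self)
        layout.addWidget(self.image_label)
        layout.addWidget(self.count_label)

    def update_view(self, panorama_bgr, head_count):
        # Convert the BGR panorama to a QImage and refresh both labels.
        rgb = cv2.cvtColor(panorama_bgr, cv2.COLOR_BGR2RGB)
        h, w, _ = rgb.shape
        qimg = QImage(rgb.tobytes(), w, h, 3 * w, QImage.Format_RGB888)
        self.image_label.setPixmap(QPixmap.fromImage(qimg))
        self.count_label.setText(f"People in panorama: {head_count}")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    win = PanoramaWindow()
    win.show()
    sys.exit(app.exec_())
```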
S7: deploy the panorama algorithm and the head detector to the platform implemented in S6.
Example 2
The invention provides a people flow statistical system based on panoramic images, which comprises:
a panorama stitching module for acquiring camera images and performing panoramic stitching and fusion on them;
a head detection module for detecting the number of human heads in the panoramic image;
and a visualized panorama and statistics module for acquiring the real-time people flow according to the detected heads.
Further, the panorama stitching module comprises:
an image acquisition unit for acquiring images from a plurality of cameras in the target area, where the cameras are deployed on the same plane along a uniform horizontal line and the camera streams are read and converted into images through the vlc library;
a feature extraction unit for extracting features from each processed image with the SIFT algorithm and then finding the best matching points with a nearest-neighbor method;
a parameter correction unit for calculating and correcting camera parameters using the matched features;
and a panorama stitching unit for performing panoramic stitching and fusion on the output frames of the cameras that form the panorama, using the corrected camera parameters.
Further, the head detection module is implemented with an AdaBoost cascade method: features are extracted to train the classification model, and the detection box is obtained directly by regressing the feature positions with a squared-error loss.
Further, the visualized panorama and statistics module uses a video display to show the panorama and the total number of people in it.
Specifically, this includes a graphical interface for image display and people-count display, see fig. 4; its modules include parameter input, panorama display and real-time data display, where the parameter input module accepts camera parameters and local video addresses.
The trained panorama fuser and the head detector are deployed and combined in one interface, finally yielding a panoramic head-detection system that can be extended to crowd queuing and flow control in various industries.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the invention provides a people flow statistical method based on panoramic images, which is used for obtaining an end-to-end people flow statistical system, the system can remotely monitor the number of people in a target area, density information is remotely provided through camera voice information according to a statistical result and panoramic visual information, and the area management efficiency is improved.
The present invention is not limited to the above embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A people flow statistical method based on panoramic images is characterized by comprising the following steps:
acquiring a camera image, and carrying out panoramic splicing and fusion on the camera image;
detecting the number of human heads in the panoramic image;
and acquiring the real-time flow rate of people according to the detected head.
2. The method according to claim 1, wherein the step of acquiring the camera image and performing panoramic stitching fusion on the camera image specifically comprises:
step 1) acquiring camera images, including deploying cameras on a unified horizontal line, reading camera streams and setting image resolution;
step 2) extracting the features of each processed image by using an SIFT algorithm, and then calculating by using a nearest neighbor method to complete the calculation of the optimal matching point;
step 3) calculating camera parameters by using the matched features and correcting the camera parameters;
and 4) carrying out panoramic splicing and fusion on the camera output pictures capable of forming the panorama by using the corrected camera parameters.
3. The method according to claim 2, wherein the step 3) is specifically:
calculating optimal estimated parameters of a camera according to affine transformation and optimal matching points, correcting the estimated camera parameters by using a beam adjustment method to reduce the feature distortion state in a panorama caused by the parameters, then correcting and format conversion of the camera parameters, obtaining the sequence of images spliced into the panorama according to an optimal adjacent matching standard, extracting image corner features by using the previously calculated camera parameters and input images according to the sequence through affine transformation, obtaining mask correction parameters by using an input image mask and the previous camera parameters through affine transformation, performing exposure compensation on the input image transformation parameters, the mask and the calculated image corners, then calculating gaps among the matched features by using a graph cut method, and storing the gaps in the corrected mask transformation parameters.
4. The method according to claim 3, wherein the step 4) is specifically:
calculating current image transformation according to the previously calculated camera parameters, the image corner points and the image taken out by the real-time output stream, calculating mask transformation of the current image according to the camera parameters, carrying out exposure compensation according to the corner points, the real-time input image and the newly acquired mask transformation, then carrying out expansion and size transformation on the mask transformation calculated in the step 3), combining the result with the new mask transformation in the current step to obtain a mask with optimized gaps, and finally fusing the current real-time image transformation, the finally optimized mask and the image corner points by using a multi-frequency fusion algorithm to obtain a spliced panoramic image.
5. The method according to claim 4, wherein the step of obtaining the stitched panoramic image by fusing the current real-time image transformation, the final optimization mask and the image corner points by using the multi-frequency fusion algorithm specifically comprises:
1) calculating a Gaussian pyramid of the input image;
2) calculating a Laplacian pyramid of the input image;
3) fusing the Laplacian pyramids at the same level; for example, using a simple linear fusion on both sides of the splice seam;
4) sequentially expanding the Laplacian pyramid at the high layer until the Laplacian pyramid has the same resolution as the source image;
5) and superposing the images of the cameras forming the panorama in sequence to obtain a final output image.
6. The method according to claim 5, wherein the step of detecting the number of persons in the panoramic image is embodied as:
the method comprises the following steps of realizing a human head detector by using an AdaBoost cascade classifier with Haar-like characteristics;
the method specifically comprises the following steps:
1) describing image global information by using an integrogram, wherein the integrogram of the specified coordinates of the image is the sum of all pixels at the upper left corner of the position, calculating the pixel value of an image area by analogy, and then acquiring the global characteristic value of the image by using the integrogram;
2) and constructing a classification detector according to the obtained features, and reducing the false recognition rate by improving the recognition rate to obtain an optimal model, wherein the positioning loss uses the square difference loss of the feature point position and the actual position.
7. A people flow statistical system based on panoramic images is characterized by comprising:
the panoramic stitching module is used for acquiring a camera image and carrying out panoramic stitching fusion on the camera image;
the head detection module is used for detecting the number of the heads in the panoramic image;
and the visual panoramic and statistical module is used for acquiring the real-time flow rate of people according to the detected head of people.
8. The system of claim 7, comprising:
the image acquisition unit is used for acquiring images of a plurality of cameras in a target area, wherein the cameras are deployed on the same plane and on a uniform horizontal line, and camera streams are read and converted into images through the vlc library;
the feature extraction unit is used for extracting features of each processed image by using an SIFT algorithm and then calculating by using a nearest neighbor method to complete the calculation of the optimal matching point;
a parameter correction unit for calculating and correcting the camera parameters using the matched features;
and the panorama splicing unit is used for carrying out panorama splicing and fusion on the output pictures of the camera capable of forming the panorama by utilizing the corrected camera parameters.
9. The system of claim 8, wherein the human head detection module is configured to be implemented using an AdaBoost cascade method, extract features to train a classification model, and directly use feature positions and a square error regression to obtain a detection box.
CN202210022221.3A 2022-01-10 2022-01-10 People flow statistical method and system based on panoramic image Pending CN114550077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210022221.3A CN114550077A (en) 2022-01-10 2022-01-10 People flow statistical method and system based on panoramic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210022221.3A CN114550077A (en) 2022-01-10 2022-01-10 People flow statistical method and system based on panoramic image

Publications (1)

Publication Number Publication Date
CN114550077A true CN114550077A (en) 2022-05-27

Family

ID=81670036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210022221.3A Pending CN114550077A (en) 2022-01-10 2022-01-10 People flow statistical method and system based on panoramic image

Country Status (1)

Country Link
CN (1) CN114550077A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115223160A (en) * 2022-09-20 2022-10-21 恒银金融科技股份有限公司 Design method of sign recognition system


Similar Documents

Publication Publication Date Title
US7024053B2 (en) Method of image processing and electronic camera
US10958854B2 (en) Computer-implemented method for generating an output video from multiple video sources
US8254643B2 (en) Image processing method and device for object recognition
US7583858B2 (en) Image processing based on direction of gravity
CN111583116A (en) Video panorama stitching and fusing method and system based on multi-camera cross photography
WO2021177324A1 (en) Image generating device, image generating method, recording medium generating method, learning model generating device, learning model generating method, learning model, data processing device, data processing method, inferring method, electronic instrument, generating method, program, and non-transitory computer-readable medium
US6633303B2 (en) Method, system and record medium for generating wide-area high-resolution image
CN101605209A (en) Camera head and image-reproducing apparatus
JP2007058634A (en) Image processing method and image processor, digital camera equipment, and recording medium with image processing program stored thereon
KR20070061382A (en) Camera system, camera control apparatus, panorama image making method and computer program product
CN114785960B (en) 360 degree panorama vehicle event data recorder system based on wireless transmission technology
US20100002074A1 (en) Method, device, and computer program for reducing the resolution of an input image
CN114973028B (en) Aerial video image real-time change detection method and system
CN113160053B (en) Pose information-based underwater video image restoration and splicing method
CN113052170B (en) Small target license plate recognition method under unconstrained scene
JP2007067847A (en) Image processing method and apparatus, digital camera apparatus, and recording medium recorded with image processing program
US20130208984A1 (en) Content scene determination device
CN114550077A (en) People flow statistical method and system based on panoramic image
JP4882577B2 (en) Object tracking device and control method thereof, object tracking system, object tracking program, and recording medium recording the program
JP2002056381A (en) Road state image processing device
CN112150355B (en) Image processing method and related equipment
JP6875646B2 (en) Image processing device and image processing program
KR102496362B1 (en) System and method for producing video content based on artificial intelligence
CN112203023B (en) Billion pixel video generation method and device, equipment and medium
JP6820489B2 (en) Image processing device and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination