CN111523347A - Image detection method and device, computer equipment and storage medium - Google Patents

Info

Publication number: CN111523347A
Authority: CN (China)
Prior art keywords: image, frame, video, strategy, frequency
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201910104736.6A
Other languages: Chinese (zh)
Inventor: 赵鑫
Current Assignee: Beijing Qihoo Technology Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Beijing Qihoo Technology Co Ltd
Application filed by: Beijing Qihoo Technology Co Ltd
Priority: CN201910104736.6A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The embodiment of the invention discloses an image detection method and device, computer equipment and a storage medium. The image detection method comprises the following steps: acquiring video data of a target video to be detected, wherein the video data comprises the acquisition time of the target video; searching a preset strategy data list for a frame extraction strategy having a mapping relation with the acquisition time, wherein the frame extraction strategy is a first frame extraction frequency for extracting frame images from the target video; performing frame extraction on the video data according to the frame extraction strategy to obtain frame images; and performing preset target object detection processing on the frame images. By adjusting the number of frame images obtained per unit time, the processor can allocate its computing capacity according to demand in different time periods, which reduces processor wear, saves energy, and effectively lowers the cost of image detection.

Description

Image detection method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of image detection, in particular to an image detection method, an image detection device, computer equipment and a storage medium.
Background
Since the advent of mathematical methods that simulate real biological neural networks, it has gradually become customary to refer to such artificial neural networks simply as neural networks. Neural networks have broad and attractive prospects in fields such as system identification, pattern recognition and intelligent control. In intelligent control in particular, the self-learning capability of neural networks has attracted strong interest, and this important characteristic is regarded as one of the keys to solving the problem of controller adaptability in automatic control.
In the prior art, a neural network model is used for image detection: a camera shoots images of the environment in a detection area in real time, the captured video data is decomposed into a plurality of frame images, and the neural network model then performs target object detection and judgment on each frame image in turn. As a result, when detecting and judging the target object, the prior art places high working pressure on the processor, consumes considerable energy, and increases the cost of image detection.
Disclosure of Invention
The embodiment of the invention provides an image detection method and device, computer equipment and a storage medium that reduce the computational pressure on the processor by performing frame extraction processing on the captured target video.
In order to solve the above technical problem, the embodiment of the present invention adopts a technical solution that: an image detection method is provided, which includes:
acquiring video data of a target video to be detected, wherein the video data comprises the acquisition time of the target video;
searching a preset strategy data list for a frame extraction strategy having a mapping relation with the acquisition time, wherein the frame extraction strategy is a first frame extraction frequency for extracting frame images from the target video;
performing frame extraction on the video data according to the frame extraction strategy to obtain a frame image;
and carrying out preset target object detection processing on the frame image.
Optionally, before the acquiring of the video data of the target video to be detected, the method includes:
counting the occurrence times of the detected target object images in the target video within each preset time period;
calculating the occurrence probability of the target object image in the corresponding time period according to the occurrence times;
and calculating the frame extraction strategy of the target video in each preset time period according to the occurrence probability, and writing the frame extraction strategy into a strategy data list.
Optionally, the searching for the frame extraction policy having a mapping relationship with the acquisition time in the preset policy data list includes:
searching a time period corresponding to the acquisition time in the strategy data list;
and calling a frame extraction strategy having a mapping relation with the time period as the frame extraction strategy of the target video.
Optionally, the performing of the preset target object detection processing on the frame image includes:
inputting the frame image into a preset image processing model, wherein the image processing model is a neural network model which is trained to a convergence state in advance and used for classifying whether a target object exists in the image or not;
and reading a classification result output by the image processing model to confirm whether a preset target object image exists in the video data.
Optionally, after sequentially reading the classification result output by the image processing model to determine whether a preset target object image exists in the video data, the method includes:
when the frame image has the target object image, calling a preset frequency threshold;
comparing the first frame extraction frequency of the target video with the frequency threshold;
and when the first frame extraction frequency is smaller than the frequency threshold, increasing the first frame extraction frequency of the target video after the frame image time sequence.
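As an illustrative sketch of this optional step, the threshold comparison could look as follows in Python. The embodiment only states that the first frequency is increased when it lies below the preset threshold; the threshold value and the size of the increase used here are assumptions for illustration.

```python
def adjust_first_frequency(current_frequency, target_detected,
                           frequency_threshold=20, increase_step=5):
    """When a target object image appears in a frame image and the
    first frame extraction frequency is below the preset threshold,
    raise the frequency used for the target video after this frame
    in the time sequence. Threshold and step are assumed values."""
    if target_detected and current_frequency < frequency_threshold:
        return min(current_frequency + increase_step, frequency_threshold)
    return current_frequency

print(adjust_first_frequency(10, True))    # prints 15
print(adjust_first_frequency(10, False))   # prints 10
```

When no target object is found, or the frequency already meets the threshold, the frequency is left unchanged, matching the comparison described above.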
Optionally, after sequentially reading the classification result output by the image processing model to determine whether a preset target object image exists in the video data, the method includes:
when any one of the frame images contains the target object image, calculating the moving speed of the target object in a preset detection monitoring area;
searching a second frame extracting frequency which has a mapping relation with the moving speed in a preset detection frequency data list;
and taking the second frame extraction frequency as the frame extraction frequency of the target video after the frame image in the time sequence.
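The mapping between moving speed and second frame extraction frequency can be sketched as a small lookup table. The speed ranges, units, and frequency values below are illustrative assumptions; the embodiment only specifies that the detection frequency data list maps moving speed to a second frequency.

```python
import math

# Illustrative detection frequency data list: ranges of moving speed
# (metres per second, assumed units) mapped to a second frame
# extraction frequency in extractions per second (assumed values).
DETECTION_FREQUENCY_DATA_LIST = [
    (0.0, 1.0, 5),          # slow-moving target: sparse extraction suffices
    (1.0, 5.0, 15),
    (5.0, math.inf, 25),    # fast-moving target: full-speed extraction
]

def second_frame_extraction_frequency(moving_speed):
    """Look up the second frame extraction frequency that has a
    mapping relation with the target object's moving speed."""
    for lower, upper, frequency in DETECTION_FREQUENCY_DATA_LIST:
        if lower <= moving_speed < upper:
            return frequency
    raise ValueError("moving speed must be non-negative")

print(second_frame_extraction_frequency(3.2))   # prints 15
```

A faster target is then sampled more densely in the portion of the video after the frame in which it was detected.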
Optionally, the video data further includes definition information of the target video, and the taking of the second frame extraction frequency as the frame extraction frequency of the target video after the frame image in the time sequence includes:
acquiring the video definition of the target video;
comparing the video definition with a preset definition threshold;
and when the video definition is smaller than the definition threshold, improving the definition of the target video after the frame image in the time sequence.
To solve the above technical problem, an embodiment of the present invention further provides an image detection apparatus, including:
the acquisition module is used for acquiring video data of a target video to be detected, wherein the video data comprises the acquisition time of the target video;
the searching module is used for searching a frame extracting strategy which has a mapping relation with the acquisition time in a preset strategy data list, wherein the frame extracting strategy is a first frame extracting frequency for extracting a frame image in the target video;
the processing module is used for extracting frames from the video data according to the frame extracting strategy to obtain a frame image;
and the execution module is used for performing preset target object detection processing on the frame image.
Optionally, the image detection apparatus further includes:
the first statistic submodule is used for counting the occurrence times of the detected target object images in the target video within each preset time period;
the first processing submodule is used for calculating the occurrence probability of the target object image in the corresponding time period according to the occurrence times;
and the first execution submodule is used for calculating the frame extraction strategy of the target video in each preset time period according to the occurrence probability and writing the frame extraction strategy into a strategy data list.
Optionally, the image detection apparatus further includes:
the second processing submodule is used for searching a time period which has a corresponding relation with the acquisition time in the strategy data list;
and the second execution submodule is used for calling a frame extraction strategy having a mapping relation with the time period as the frame extraction strategy of the target video.
Optionally, the image detection apparatus further includes:
the third processing submodule is used for inputting the frame image into a preset image processing model, wherein the image processing model is a neural network model which is trained to a convergence state in advance and used for classifying whether a target object exists in the image or not;
and the third execution submodule is used for reading the classification result output by the image processing model so as to confirm whether a preset target object image exists in the video data.
Optionally, the image detection apparatus further includes:
the fourth processing submodule is used for calling a preset frequency threshold when the frame image contains the target object image;
the first comparison sub-module is used for comparing a first frame extraction frequency of the target video with the frequency threshold;
and the fourth execution sub-module is used for increasing the first frame extraction frequency of the target video after the frame image time sequence when the first frame extraction frequency is smaller than the frequency threshold.
Optionally, the image detection apparatus further includes:
the fifth processing submodule is used for calculating the moving speed of the target object in a preset detection monitoring area when any one of the frame images contains the target object image;
the first searching submodule is used for searching a second frame extracting frequency which has a mapping relation with the moving speed in a preset detection frequency data list;
and the fifth execution submodule is used for taking the second frame extraction frequency as the frame extraction frequency of the target video after the frame image in the time sequence.
Optionally, the image detection apparatus further includes:
the first obtaining submodule is used for obtaining the video definition of the target video;
the second comparison submodule is used for comparing the video definition with a preset definition threshold;
and the sixth processing submodule is used for improving the definition of the target video after the frame image in the time sequence when the video definition is smaller than the definition threshold.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, cause the processor to perform the steps of the image detection method described above.
To solve the above technical problem, an embodiment of the present invention further provides a storage medium storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to execute the steps of the image detection method.
The embodiment of the invention has the following beneficial effects: when image detection is performed, a frame extraction strategy corresponding to the acquisition time of the acquired target video is looked up in a strategy data list. The strategy data list is built from historical detection data; it records the probability of the target object appearing in the detection area in different time periods and sets a corresponding first frame extraction frequency according to that probability. The frame extraction frequency is lower in time periods where the target object is less likely to appear and higher otherwise, so the number of frame images obtained per unit time is adjusted. The processor can therefore allocate its computing capacity according to demand in different time periods, which reduces processor wear, saves energy, and effectively lowers the cost of image detection.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a basic process of an image detection method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating refining a policy data list by historical data according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating a process of searching for a corresponding frame extraction policy by collecting time according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a process of detecting an image of a target object by a neural network model according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a first method for increasing frame decimation frequency according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a second method for increasing frame decimation frequency according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating an embodiment of adjusting the sharpness of a target video according to the detection classification result;
FIG. 8 is a schematic diagram of a basic structure of an image detection apparatus according to an embodiment of the present invention;
FIG. 9 is a block diagram of the basic structure of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in this specification, the claims and the above figures, a number of operations appear in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 merely distinguish the various operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the descriptions of "first", "second", etc. herein are used to distinguish different messages, devices, modules, etc.; they do not represent a sequence, nor do they require that "first" and "second" be of different types.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As will be appreciated by those skilled in the art, a "terminal" as used herein includes devices that are only wireless signal receivers without transmit capability, as well as devices that contain receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with or without a multi-line display; a PCS (Personal Communications Service) device that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device that has and/or includes a radio frequency receiver. As used herein, a "terminal" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any location on earth and/or in space. As used herein, a "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device) and/or a mobile phone with a music/video playing function, or a smart TV, a set-top box, etc.
Specifically, referring to fig. 1, fig. 1 is a basic flow chart of the image detection method according to the present embodiment.
As shown in fig. 1, an image detection method includes:
s1100, acquiring video data of a target video to be detected, wherein the video data comprises the acquisition time of the target video;
in this embodiment, the target video is the detection video, and the target video is collected in real time. However, the definition of the target video is not limited thereto, and in some embodiments, when the historical detection video is searched and played back, the target video is the historical video that has been shot.
The target video is composed of a plurality of frame images, and the acquisition of each frame image has corresponding acquisition time. Therefore, the video data includes the capture time of each frame of picture image.
S1200, searching a frame extraction strategy having a mapping relation with the acquisition time in a preset strategy data list, wherein the frame extraction strategy is a first frame extraction frequency for extracting a frame image in the target video;
according to the acquisition time of the current acquisition target video, a frame extraction strategy having a mapping relation with the acquisition time is searched in a preset strategy data list, and the frame extraction strategy is a first frame extraction frequency for extracting frame images in the target video.
The strategy data list is formed according to historical detection data, records the probability of the target object appearing in the detection area in different time periods, and sets a corresponding first frame extraction frequency according to the probability. That is, the frame extraction frequency is low in the time period with low occurrence probability of the target object, otherwise, the frame extraction frequency is high, so that the number of the frame images obtained in the unit time is adjusted.
The frame extraction strategy is a key-value pair between a time period and a first frame extraction frequency; the first frame extraction frequencies of different time periods may take different values.
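As a concrete illustration, the strategy data list and the lookup in step S1200 can be sketched in Python. The time-period boundaries and frequency values below are illustrative assumptions, not values prescribed by the method; in practice they would come from the historical statistics described later.

```python
from datetime import time

# Illustrative strategy data list: each entry maps a time period
# (start, end) to a first frame extraction frequency in frames/second.
STRATEGY_DATA_LIST = [
    (time(0, 0), time(6, 0), 2),     # low-activity overnight period
    (time(6, 0), time(9, 0), 15),    # high-activity morning period
    (time(9, 0), time(18, 0), 10),
    (time(18, 0), time(23, 59, 59), 5),
]

def find_frame_extraction_strategy(acquisition_time):
    """Return the first frame extraction frequency mapped to the
    time period that contains the acquisition time."""
    for start, end, frequency in STRATEGY_DATA_LIST:
        if start <= acquisition_time < end:
            return frequency
    return 2  # fall back to the lowest assumed frequency

print(find_frame_extraction_strategy(time(8, 30)))   # prints 15
```

An acquisition time of 08:30 falls in the 06:00-09:00 period and therefore maps to that period's first frequency.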
S1300, performing frame extraction on the video data according to the frame extraction strategy to obtain a frame image;
and performing frame extraction on the video data according to the found frame extraction strategy to obtain a frame image, wherein the first frame extraction frequency in the frame extraction strategy is actually the time interval between two adjacent frame extraction nodes. For example, when the first frame extraction frequency is 2 times per second, the time interval between two adjacent frame extraction nodes is 0.5 second.
According to the frame extraction interval represented by the first frame extraction frequency, frame extraction is performed on the target video being shot in real time or on a historical target video whose shooting has been completed. Extracting a frame means extracting the frame image whose acquisition time coincides with the frame extraction node.
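The interval arithmetic above can be sketched as follows; the helper that maps extraction nodes to frame indices is a hypothetical illustration assuming a known, constant video frame rate.

```python
def frame_extraction_interval(first_frequency):
    """Interval in seconds between two adjacent frame extraction
    nodes; e.g. 2 extractions/second gives a 0.5 second interval."""
    return 1.0 / first_frequency

def extract_frame_indices(video_fps, first_frequency, total_frames):
    """Indices of the frames whose acquisition time coincides with a
    frame extraction node, for a recording of total_frames frames."""
    step = video_fps / first_frequency  # frames between extraction nodes
    indices = []
    position = 0.0
    while position < total_frames:
        indices.append(int(position))
        position += step
    return indices

print(frame_extraction_interval(2))        # prints 0.5
print(extract_frame_indices(25, 2, 50))    # prints [0, 12, 25, 37]
```

At a first frequency of 2 times per second, a 25 fps two-second clip (50 frames) yields four extracted frame images, one every 0.5 seconds.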
S1400, performing preset target object detection processing on the frame image.
Target object detection processing is performed on the extracted frame image: it identifies whether an image of a target object exists in the frame image. The target object can be a pre-designated person, animal, car, or other object with the ability to move; for example, the processing detects whether a human-shaped image exists in the frame image.
In some embodiments, a neural network model, referred to here as the image processing model, is used to perform the target object detection processing on the frame image.
The image processing model can be a convolutional neural network model (CNN) that has been trained to a convergent state, but the image processing model can also be: a deep neural network model (DNN), a recurrent neural network model (RNN), or a variant of the three network models described above.
When the image processing model is trained, a large number of training samples with target object images are adopted for training of target object image recognition, and after the training is carried out to a convergence state, the image processing model can accurately extract and recognize the target object images.
In some embodiments, after the frame image is obtained by frame extraction, the terminal used for detection sends the frame image to a server, and the server performs the target object detection processing; the server can likewise use a neural network model to perform the target object detection processing on the frame image.
In the above embodiment, when image detection is performed, a frame extraction strategy corresponding to the acquisition time of the captured target video is looked up in the strategy data list. The strategy data list is built from historical detection data; it records the probability of the target object appearing in the detection area in different time periods and sets a corresponding first frame extraction frequency according to that probability. The frame extraction frequency is lower in time periods where the target object is less likely to appear and higher otherwise, so the number of frame images obtained per unit time is adjusted. The processor can therefore allocate its computing capacity according to demand in different time periods, which reduces processor wear, saves energy, and effectively lowers the cost of image detection.
In some embodiments, the detection system carrying the image detection method in this embodiment is an evolutionary system with self-learning capability, and the evolutionary capability thereof can be embodied by collecting historical data to refine the self-policy data list. Referring to fig. 2, fig. 2 is a schematic flow chart illustrating refining a policy data list through historical data according to the present embodiment.
As shown in fig. 2, before the step S1100 shown in fig. 1, the method includes:
s1011, counting the occurrence frequency of the detected target object image in the target video within each preset time period;
In this embodiment, the time span over which image detection is required is divided into a plurality of different time periods, and the number of times the target object image is detected in each time period is counted. For example, when 24-hour continuous detection is required, the 24 hours are divided into 24 time periods of 1 hour each. The division of time periods is not limited to this; depending on the application scenario, in some embodiments the set durations of the time periods can differ, for example 02:00-06:00 is set as one time period and 08:00-09:00 is set as another.
S1012, calculating the occurrence probability of the target object image in the corresponding time period according to the occurrence times;
According to the counted number of times the target object image is detected in each time period, the occurrence probability of the target object image in the corresponding time period is calculated. In this embodiment, the reference value for the probability calculation is defined as 100, so the probability is (number of detections in the time period / 100) × 100%. For example, when 60 detections of the target object image occur in the 08:00-09:00 time period, the probability is 60/100 × 100% = 60%. The setting of the reference value is not limited to this; in some embodiments the reference value is, without limitation, 10, 30, 200, 800 or 1300, etc.
In some embodiments, the probability is calculated on a per-day basis: a time period is counted once when the target object image is detected at least once in that period on a given day, even if it is detected more than once that day. The occurrence probability is then the number of days on which the target object image was detected in the time period divided by the total number of days. For example, over a 30-day statistic, if the target object image is detected in the 08:00-09:00 period on 18 days, the occurrence probability for that period is 18/30 × 100% = 60%.
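Both probability calculations described above reduce to a simple ratio; a minimal sketch, using the reference value of 100 and the day-based statistic from the examples:

```python
def occurrence_probability_by_reference(detections, reference=100):
    """Probability of the target object image in a time period,
    using the fixed reference value described in the embodiment."""
    return detections / reference

def occurrence_probability_by_days(days_detected, total_days):
    """Day-based statistic: a day counts once no matter how many
    detections fall inside the time period on that day."""
    return days_detected / total_days

print(occurrence_probability_by_reference(60))   # prints 0.6
print(occurrence_probability_by_days(18, 30))    # prints 0.6
```

Both variants reproduce the 60% figure from the worked examples above.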
And S1013, calculating the frame extraction strategy of the target video in each preset time period according to the occurrence probability, and writing the frame extraction strategy into a strategy data list.
The frame extraction strategy of the target video in each preset time period is calculated from the computed probability. The frame extraction frequency of full-speed frame extraction is 25 frames per second, i.e. 25 frame images are extracted per second, and the integer obtained by rounding the probability value multiplied by 25 is the first frame extraction frequency in the time period. For example, 25 × 60% = 15, so the corresponding frame extraction frequency in that time period is 15 times per second. When the occurrence probability is zero, the lowest frame extraction frequency should be used, e.g. once every 5 seconds. The setting of the lowest frame extraction frequency is not limited to this; its value can be customized according to the application scenario.
After the first frame extraction frequency corresponding to each time period is generated, the key-value pair of the time period and the first frame extraction frequency is written into the strategy data list, generating the frame extraction strategy for that time period.
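The calculation of the first frequency and the writing of key-value pairs can be sketched as follows. The full-speed rate of 25 frames per second and the once-every-5-seconds floor come from the text; expressing the floor as 0.2 extractions per second is an assumption of this sketch.

```python
FULL_SPEED_FREQUENCY = 25   # full-speed extraction: 25 frames per second
MIN_FREQUENCY = 0.2         # lowest frequency: once every 5 seconds (assumed encoding)

def first_frame_extraction_frequency(probability):
    """Round probability x 25 to an integer; fall back to the lowest
    frequency when the occurrence probability is zero."""
    frequency = round(FULL_SPEED_FREQUENCY * probability)
    return frequency if frequency > 0 else MIN_FREQUENCY

def build_strategy_data_list(period_probabilities):
    """Write the (time period -> first frequency) key-value pair for
    every preset time period into the strategy data list."""
    return {period: first_frame_extraction_frequency(p)
            for period, p in period_probabilities.items()}

strategies = build_strategy_data_list({"08:00-09:00": 0.6,
                                       "02:00-06:00": 0.0})
print(strategies)   # prints {'08:00-09:00': 15, '02:00-06:00': 0.2}
```

A 60% occurrence probability yields the 15 times/second frequency of the worked example, while a zero-probability period falls back to the minimum.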
Through the autonomous learning mode, the strategy data list most appropriate to the current detection environment can be perfected according to the requirements of the actual environment, detection resources are saved to the maximum extent, and detection cost is reduced.
In some embodiments, after acquiring the acquisition time of the target image, the corresponding frame extraction policy needs to be found in the policy data list according to the acquisition time. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating the process of searching for the corresponding frame-extracting policy by the acquisition time according to the present embodiment.
As shown in fig. 3, the S1200 step shown in fig. 1 includes:
s1211, searching a time period corresponding to the acquisition time in the strategy data list;
When the target video is collected, the acquisition time of the collected target video is used as a search keyword, and the time period corresponding to the acquisition time is searched for in the strategy data list with this keyword. For example, if the acquisition time is 08:30, the corresponding time period is 08:00-09:00.
And S1212, calling a frame extraction strategy having a mapping relation with the time period as the frame extraction strategy of the target video.
The strategy data list records a frame extraction strategy for each time period, and the strategy includes the first frame extraction frequency corresponding to that period. After the time period information is obtained, the frame extraction strategy corresponding to the time period is called from the strategy data list and defined as the frame extraction strategy of the target video in the current time period.
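A minimal sketch of this lookup (S1211–S1212), assuming hour-long periods keyed by start and end times; the list contents are illustrative:

```python
from datetime import time

# Hypothetical strategy data list: {(period start, period end): first frequency}.
policy_data_list = {
    (time(8, 0), time(9, 0)): 15,   # busy hour: 15 frames per second
    (time(2, 0), time(3, 0)): 0.2,  # quiet hour: one frame every 5 seconds
}

def lookup_frame_policy(acquisition_time):
    """S1211: find the period containing the acquisition time;
    S1212: return the first frame extraction frequency mapped to it."""
    for (start, end), frequency in policy_data_list.items():
        if start <= acquisition_time < end:
            return frequency
    return None

print(lookup_frame_policy(time(8, 30)))  # 15 -- 08:30 falls in 08:00-09:00
```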
In some embodiments, the terminal performs the object image detection processing on the frame image locally. Referring to fig. 4, fig. 4 is a schematic flow chart illustrating a detection process performed on an image of a target object by a neural network model according to the present embodiment.
As shown in fig. 4, the step S1400 shown in fig. 1 includes:
S1411, inputting the frame image into a preset image processing model, wherein the image processing model is a neural network model which is trained to a convergence state in advance and used for classifying whether a target object exists in the image;
The frame image is subjected to target object detection processing using the image processing model. The image processing model can be a convolutional neural network (CNN) model that has been trained to a convergence state, but it can also be a deep neural network (DNN) model, a recurrent neural network (RNN) model, or a variant of these three network models.
When the image processing model is trained, a large number of training samples with target object images are adopted for training of target object image recognition, and after the training is carried out to a convergence state, the image processing model can accurately extract and recognize the target object images.
And S1412, reading the classification result output by the image processing model to confirm whether a preset target object image exists in the video data.
The classification result output by the image processing model is read; it records the model's judgment on whether a target object image exists in the frame image. If the classification result is "yes", the frame image includes the target object image; if the classification result is "none", the frame image does not include the target object image.
Detecting the target object image through a neural network model improves both the accuracy and the efficiency of target object image detection.
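As a sketch of the S1411–S1412 interface (not the actual trained model), any callable that maps a frame image to a "yes"/"none" classification result fits; the stand-in model below flags bright frames purely for illustration:

```python
def has_target_object(frame_image, image_processing_model):
    """S1412: read the model's classification result for the frame image."""
    return image_processing_model(frame_image) == "yes"

# Hypothetical stand-in for a CNN trained to convergence: classifies a
# frame (here a flat list of pixel intensities) by its mean brightness.
def stub_model(frame_image):
    return "yes" if sum(frame_image) / len(frame_image) > 128 else "none"

print(has_target_object([200, 210, 190, 205], stub_model))  # True
```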
In some embodiments, when it is detected that the target object image exists in the frame image, in order to continue tracking or enhance monitoring, the frame rate adjustment is performed for a time period with a lower frame rate, so as to enhance the detection density. Referring to fig. 5, fig. 5 is a schematic flow chart illustrating a first method for increasing a frame decimation frequency according to the present embodiment.
As shown in fig. 5, after S1412 shown in fig. 4, the method includes:
S1421, when the frame image has the target object image, calling a preset frequency threshold;
When the frame image is detected to include the target object image, the target object has appeared in the detection area and its motion there must be strictly monitored, so it is necessary to measure whether the current frame extraction frequency is sufficient for monitoring the target object's motion.
The measurement is performed as follows: a preset frequency threshold is called, where the frequency threshold is the value used to judge whether the frame extraction frequency of the current time period is sufficient to monitor the movement of the target object. For example, the frequency threshold is 10 frames per second. However, the value of the frequency threshold is not limited to this and can be set in a user-defined manner according to the actual needs of different application scenarios. In some embodiments, the frequency threshold is dynamic: different frequency thresholds are invoked according to the number and moving direction of the detected target object images, and the number and direction of the target object images are proportional to the value of the frequency threshold.
S1422, comparing the first frame extraction frequency of the target video with the frequency threshold;
and comparing the first frame extraction frequency of the target video in the current time period with a frequency threshold value in a manner of comparing the values of the first frame extraction frequency and the frequency threshold value.
S1423, when the first frame extraction frequency is smaller than the frequency threshold, increasing the first frame extraction frequency of the target video after the frame image time sequence.
When the first frame extraction frequency is less than the frequency threshold, the first frame extraction frequency of the target video after the frame image in time sequence is increased. Detecting the target object image while the first frame extraction frequency of the current time period is below the threshold indicates that the current frequency is insufficient for monitoring the target object's movement, so it must be raised: the frame extraction frequency of the target video after the frame image in time sequence is increased to the frequency represented by the frequency threshold.
When the first frame extraction frequency is greater than or equal to the frequency threshold, the first frame extraction frequency of the current time period already meets the purpose of monitoring the movement of the target object, and frame extraction continues at the first frame extraction frequency.
After the target object image is detected in the frame image, whether the first frame extraction frequency of the current time period meets the standard is judged, and the frequency is raised when it does not, which effectively guarantees the timeliness of monitoring.
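The comparison logic in S1421–S1423 amounts to clamping the current frequency up to the threshold; a sketch (names hypothetical, threshold value from the example above):

```python
FREQUENCY_THRESHOLD = 10  # frames per second, per the example in the text

def adjusted_frequency(first_frequency, threshold=FREQUENCY_THRESHOLD):
    """Raise the first frame extraction frequency to the threshold only
    when it falls short; otherwise keep extracting at the current rate."""
    if first_frequency < threshold:
        return threshold        # S1423: increase to the threshold frequency
    return first_frequency      # already sufficient for monitoring

print(adjusted_frequency(5))   # 10 -- a 5 fps period is raised after detection
print(adjusted_frequency(15))  # 15 -- already above the threshold, unchanged
```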
In some embodiments, after the frame image has the target object image, the frame extraction frequency is correspondingly increased according to the moving speed of the target object image in the monitoring area, so as to achieve the purpose of monitoring the movement direction of the target object. Referring to fig. 6, fig. 6 is a flowchart illustrating a second method for increasing a frame rate according to the present embodiment.
As shown in fig. 6, after S1412 shown in fig. 4, the method includes:
S1431, when any one of the frame images has the target object image, calculating the moving speed of the target object in a preset detection monitoring area;
When the frame image is detected to include the target object image, the target object has appeared in the detection area and its motion there must be strictly monitored, so it is necessary to measure whether the current frame extraction frequency is sufficient for monitoring the target object's motion.
The measurement mode is as follows: and calculating the moving speed of the target object in a preset detection monitoring area, and judging whether the current frame extraction frequency can adapt to the moving speed of the target object.
The moving speed can be measured by a speed-measuring peripheral: after the target object image is detected, the speed-measuring device installed in the detection area, for example a laser velocimeter or an ultrasonic velocimeter, is activated to measure the speed of the target object.
In some embodiments, image processing is used to measure the speed of the target object. Two frame images containing the same target object image are obtained. Because the shooting device is fixed and the distance between the detection area and the shooting device is calibrated at installation, the distance between the target object and the shooting device can be calculated from the view proportion of the target object image in the frame image. The distances from the target object image in each of the two frames to the shooting device are calculated, and these two distances form the sides of a triangle whose included angle has the shooting device as its origin. The moving distance of the target object between the two frames is then calculated through the triangle side formula, the moving time is obtained by subtracting the acquisition times of the two frame images, and the moving speed of the target object in the detection area is the moving distance divided by the moving time.
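The camera-centred triangle described above resolves with the law of cosines: the chord between the two object positions is the moved distance. A sketch, assuming the two camera-to-object distances and their included angle have already been estimated from the view proportions:

```python
import math

def moving_speed(d1, d2, included_angle_rad, t1, t2):
    """Distance moved between two frames, via the law of cosines on the
    triangle with the camera at the vertex, divided by the elapsed time."""
    moved = math.sqrt(d1 ** 2 + d2 ** 2
                      - 2 * d1 * d2 * math.cos(included_angle_rad))
    return moved / (t2 - t1)

# Right angle at the camera, sides 3 m and 4 m, frames 1 s apart -> 5 m/s
print(round(moving_speed(3.0, 4.0, math.pi / 2, 0.0, 1.0), 6))  # 5.0
```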
S1432, searching a preset detection frequency data list for a second frame extraction frequency having a mapping relation with the moving speed;
In this embodiment, a detection frequency data list is set, which records the frame extraction frequencies corresponding to different moving speeds of the target object; the frame extraction frequency corresponding to a moving speed is defined as the second frame extraction frequency. The second frame extraction frequency having a mapping relation with the moving speed can then be found in the detection frequency data list.
And S1433, taking the second frame extraction frequency as the frame extraction frequency of the target video after the frame picture image time sequence.
After the second frame extraction frequency having a mapping relation with the current moving speed of the target object is found, it is used as the frame extraction frequency of the target video after the frame image in time sequence, so that the frame extraction frequency matches the current moving speed of the target object.
When the target object moves out of the detection area, the frame extraction frequency returns to the first frame extraction frequency.
By calculating the moving speed of the target object in the detection area and changing the frame extraction frequency of the current time period according to that speed, the frame extraction frequency matches the current moving speed of the target object, improving the timeliness of monitoring while still saving detection resources.
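The detection frequency data list in S1432 can be sketched as speed bands mapped to second frame extraction frequencies. All band boundaries and frequencies here are hypothetical, not values from the patent:

```python
# Hypothetical (upper speed bound in m/s, second frequency in fps) bands.
detection_frequency_list = [
    (1.0, 10),            # slow movement: 10 frames per second
    (3.0, 15),            # moderate movement: 15 frames per second
    (float("inf"), 25),   # fast movement: full-speed extraction
]

def second_extraction_frequency(moving_speed):
    """S1432: find the frequency having a mapping relation with the speed."""
    for upper_bound, frequency in detection_frequency_list:
        if moving_speed <= upper_bound:
            return frequency

print(second_extraction_frequency(2.2))  # 15
```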
In some embodiments, the shooting device or terminal in the frame extraction area has multiple image acquisition modes. For example, several cameras with different pixel counts are arranged: different cameras acquire target videos of different definitions, a lower-resolution camera is used in the normal detection environment, and a higher-resolution camera shoots the target area after a target object image is detected. Referring to fig. 7, fig. 7 is a flowchart illustrating a process of adjusting the sharpness of a target video according to a detection classification result according to the present embodiment.
As shown in fig. 7, after the step S1433 shown in fig. 6, the method includes:
S1441, acquiring the video definition of the target video;
When the frame image is detected to contain the target object image, the video definition of the current shooting device, or of the captured target video, is read.
S1442, comparing the video definition with a preset definition threshold;
and comparing the video definition with a preset definition threshold value. The definition threshold is a set definition value for judging whether the current shooting definition meets the purpose of monitoring the target object. The comparison method is to compare the video definition with the definition threshold.
S1443, when the video definition is smaller than the definition threshold, improving the definition of the target video behind the frame image time sequence.
When the comparison shows that the video definition is less than the definition threshold, the definition of the target video after the frame image in time sequence is improved: video definition below the threshold means the current shooting definition of the camera is too low to meet the requirement of tracking the target object, so it must be raised. It is raised by shooting the detection area with a higher-definition camera. When the comparison shows that the video definition is greater than or equal to the definition threshold, shooting proceeds normally.
In a conventional detection scenario, a lower-definition camera is used for shooting; when the target object image is detected, a higher-definition camera is used instead. The low-definition frame images have a small data volume, which reduces the processor load of target object detection processing, while shooting with a high-quality camera after the target object appears strengthens the monitoring of the target object. This approach further saves detection resources.
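The definition check in S1441–S1443 reduces to a camera selection rule; a sketch with hypothetical camera identifiers and threshold value:

```python
DEFINITION_THRESHOLD = 1080  # hypothetical threshold, in vertical pixels

def select_camera(video_definition, threshold=DEFINITION_THRESHOLD,
                  low_camera="camera_720p", high_camera="camera_4k"):
    """S1443: switch to the higher-definition camera only when the current
    video definition falls below the threshold; otherwise shoot normally."""
    if video_definition < threshold:
        return high_camera   # raise definition for tracking the target
    return low_camera        # current definition already sufficient

print(select_camera(720))   # camera_4k
print(select_camera(2160))  # camera_720p
```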
To solve the above technical problem, an embodiment of the present invention further provides an image detection apparatus.
Referring to fig. 8, fig. 8 is a schematic view of a basic structure of the image detection apparatus according to the present embodiment.
As shown in fig. 8, an image detection apparatus includes: an obtaining module 2100, a searching module 2200, a processing module 2300, and an executing module 2400. The acquiring module 2100 is configured to acquire video data of a target video to be detected, where the video data includes acquisition time of the target video; the searching module 2200 is configured to search, in a preset policy data list, a frame extraction policy having a mapping relationship with the acquisition time, where the frame extraction policy is a first frame extraction frequency for extracting a frame image in the target video; the processing module 2300 is configured to perform frame extraction on the video data according to a frame extraction policy to obtain a frame image; the execution module 2400 is configured to perform preset target object detection processing on the frame image.
When the image detection device performs image detection, it searches the strategy data list for the frame extraction strategy corresponding to the acquisition time of the collected target video. The strategy data list is built from historical detection data: it records the probability of the target object appearing in the detection area during different time periods, and a corresponding first frame extraction frequency is set according to that probability. The frame extraction frequency is lower in time periods where the target object is less likely to appear and higher otherwise, so the number of frame images obtained per unit time is adjusted, the processor can free up corresponding computing capacity in different time periods, processor wear is reduced, energy is saved, and the cost of image detection is effectively reduced.
In some embodiments, the image detection apparatus further includes: the device comprises a first statistic submodule, a first processing submodule and a first execution submodule. The first statistic submodule is used for counting the occurrence times of target object images detected in a target video in each preset time period; the first processing submodule is used for calculating the occurrence probability of the target object image in the corresponding time period according to the occurrence times; the first execution submodule is used for calculating a frame extraction strategy of the target video in each preset time period according to the occurrence probability and writing the frame extraction strategy into a strategy data list.
In some embodiments, the image detection apparatus further includes: a second processing submodule and a second execution submodule. The second processing submodule is used for searching a time period which has a corresponding relation with the acquisition time in the strategy data list; and the second execution submodule is used for calling a frame extraction strategy which has a mapping relation with the time period as the frame extraction strategy of the target video.
In some embodiments, the image detection apparatus further includes: a third processing submodule and a third execution submodule. The third processing submodule is used for inputting the frame image into a preset image processing model, wherein the image processing model is a neural network model which is trained to a convergence state in advance and used for classifying whether a target object exists in the image or not; and the third execution submodule is used for reading the classification result output by the image processing model so as to confirm whether the preset target object image exists in the video data.
In some embodiments, the image detection apparatus further includes: the system comprises a fourth processing submodule, a first comparison submodule and a fourth execution submodule. The fourth processing submodule is used for calling a preset frequency threshold value when the frame image has the target object image; the first comparison sub-module is used for comparing a first frame extraction frequency of the target video with a frequency threshold; and the fourth execution sub-module is used for increasing the first frame extraction frequency of the target video after the frame picture image time sequence when the first frame extraction frequency is less than the frequency threshold.
In some embodiments, the image detection apparatus further includes: a fifth processing submodule, a first search submodule and a fifth execution submodule. The fifth processing submodule is used for calculating the moving speed of the target object in a preset detection monitoring area when any one of the frame images has the target object image; the first searching submodule is used for searching a second frame extracting frequency which has a mapping relation with the moving speed in a preset detection frequency data list; and the fifth execution sub-module is used for taking the second frame extraction frequency as the frame extraction frequency of the target video positioned after the frame picture image time sequence.
In some embodiments, the image detection apparatus further includes: the device comprises a first obtaining submodule, a second comparing submodule and a sixth processing submodule. The first obtaining submodule is used for obtaining the video definition of a target video; the second comparison submodule is used for comparing the video definition with a preset definition threshold; and the sixth processing submodule is used for improving the definition of the target video behind the time sequence of the frame image when the video definition is smaller than the definition threshold.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 9, fig. 9 is a block diagram of a basic structure of a computer device according to the present embodiment.
As shown in fig. 9, the internal structure of the computer device is schematically illustrated. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database and computer readable instructions; the database can store control information sequences, and the computer readable instructions, when executed by the processor, can enable the processor to implement an image detection method. The processor of the computer device provides calculation and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer readable instructions which, when executed by the processor, cause the processor to perform the image detection method. The network interface of the computer device is used for connecting and communicating with the terminal. Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute the specific functions of the obtaining module 2100, the searching module 2200, the processing module 2300 and the executing module 2400 in fig. 8, and the memory stores the program codes and various data required for executing these modules. The network interface is used for data transmission to and from a user terminal or server. The memory in this embodiment stores the program codes and data required for executing all the sub-modules of the image detection device, and the server can call them to execute the functions of all the sub-modules.
When the computer device performs image detection, it searches the strategy data list for the frame extraction strategy corresponding to the acquisition time of the collected target video. The strategy data list is built from historical detection data: it records the probability of the target object appearing in the detection area during different time periods, and a corresponding first frame extraction frequency is set according to that probability. The frame extraction frequency is lower in time periods where the target object is less likely to appear and higher otherwise, so the number of frame images obtained per unit time is adjusted, the processor can free up corresponding computing capacity in different time periods, processor wear is reduced, energy is saved, and the cost of image detection is effectively reduced.
The present invention also provides a storage medium storing computer-readable instructions, which when executed by one or more processors, cause the one or more processors to perform the steps of any of the embodiments of the image detection method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not performed in a strict order and may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence, but may be performed in turns or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.

Claims (10)

1. An image detection method, comprising:
acquiring video data of a target video to be detected, wherein the video data comprises the acquisition time of the target video;
searching a frame extracting strategy having a mapping relation with the acquisition time in a preset strategy data list, wherein the frame extracting strategy is a first frame extracting frequency for extracting a frame picture image in the target video;
performing frame extraction on the video data according to the frame extraction strategy to obtain a frame image;
and carrying out preset target object detection processing on the frame image.
2. The image detection method as claimed in claim 1, wherein said obtaining video data of the target video to be detected comprises:
counting the occurrence times of the detected target object images in the target video within each preset time period;
calculating the occurrence probability of the target object image in the corresponding time period according to the occurrence times;
and calculating the frame extraction strategy of the target video in each preset time period according to the occurrence probability, and writing the frame extraction strategy into a strategy data list.
3. The image detection method as claimed in claim 2, wherein the searching for the frame-extracting policy having a mapping relationship with the capturing time in the preset policy data list comprises:
searching a time period corresponding to the acquisition time in the strategy data list;
and calling a frame extraction strategy having a mapping relation with the time period as the frame extraction strategy of the target video.
4. The image detection method as claimed in claim 1, wherein the performing of the predetermined target object detection processing on the frame image comprises:
inputting the frame image into a preset image processing model, wherein the image processing model is a neural network model which is trained to a convergence state in advance and used for classifying whether a target object exists in the image or not;
and reading a classification result output by the image processing model to confirm whether a preset target object image exists in the video data.
5. The image detection method as claimed in claim 4, wherein after the reading of the classification result output by the image processing model to confirm whether a preset target object image exists in the video data, the method further comprises:
when the frame image has the target object image, calling a preset frequency threshold;
comparing the first frame extraction frequency of the target video with the frequency threshold;
and when the first frame extraction frequency is smaller than the frequency threshold, increasing the first frame extraction frequency of the target video after the frame image time sequence.
6. The image detection method as claimed in claim 4, wherein after the reading of the classification result output by the image processing model to confirm whether a preset target object image exists in the video data, the method further comprises:
when any one of the frame images has the target object image, calculating the moving speed of the target object in a preset detection monitoring area;
searching a second frame extracting frequency which has a mapping relation with the moving speed in a preset detection frequency data list;
and taking the second frame extraction frequency as the frame extraction frequency of the target video after the frame picture image time sequence.
7. The image detection method as claimed in claim 6, wherein the video data further includes definition information of the target video, and after the taking of the second frame extraction frequency as the frame extraction frequency of the target video after the frame picture image time sequence, the method further comprises:
acquiring the video definition of the target video;
comparing the video definition with a preset definition threshold;
and when the video definition is smaller than the definition threshold value, improving the definition of the target video behind the frame image time sequence.
8. An image detection device, comprising:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring video data of a target video to be detected, and the video data comprises the acquisition time of the target video;
the searching module is used for searching a frame extracting strategy which has a mapping relation with the acquisition time in a preset strategy data list, wherein the frame extracting strategy is a first frame extracting frequency for extracting a frame image in the target video;
the processing module is used for extracting frames from the video data according to the frame extracting strategy to obtain a frame image;
and the execution module is used for carrying out preset target object detection processing on the frame picture image.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to carry out the steps of the image detection method according to any one of claims 1 to 7.
10. A storage medium having computer-readable instructions stored thereon which, when executed by one or more processors, cause the one or more processors to perform the steps of the image detection method as claimed in any one of claims 1 to 7.
CN201910104736.6A 2019-02-01 2019-02-01 Image detection method and device, computer equipment and storage medium Pending CN111523347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910104736.6A CN111523347A (en) 2019-02-01 2019-02-01 Image detection method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111523347A true CN111523347A (en) 2020-08-11

Family

ID=71908241

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910104736.6A Pending CN111523347A (en) 2019-02-01 2019-02-01 Image detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111523347A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112786163A (en) * 2020-12-31 2021-05-11 北京小白世纪网络科技有限公司 Ultrasonic image processing and displaying method and system and storage medium
CN112786163B (en) * 2020-12-31 2023-10-24 北京小白世纪网络科技有限公司 Ultrasonic image processing display method, system and storage medium
CN113657338A (en) * 2021-08-25 2021-11-16 平安科技(深圳)有限公司 Transmission state identification method and device, computer equipment and storage medium
CN114245167A (en) * 2021-11-08 2022-03-25 浙江大华技术股份有限公司 Video storage method and device and computer readable storage medium
CN114679607A (en) * 2022-03-22 2022-06-28 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN114679607B (en) * 2022-03-22 2024-03-05 深圳云天励飞技术股份有限公司 Video frame rate control method and device, electronic equipment and storage medium
CN115278355B (en) * 2022-06-20 2024-02-13 北京字跳网络技术有限公司 Video editing method, device, equipment, computer readable storage medium and product
CN115278355A (en) * 2022-06-20 2022-11-01 北京字跳网络技术有限公司 Video editing method, device, equipment, computer readable storage medium and product
CN115065798A (en) * 2022-08-18 2022-09-16 广州智算信息技术有限公司 Big data-based video analysis monitoring system
CN115065798B (en) * 2022-08-18 2022-11-22 广州智算信息技术有限公司 Big data-based video analysis monitoring system
CN115761571A (en) * 2022-10-26 2023-03-07 北京百度网讯科技有限公司 Video-based target retrieval method, device, equipment and storage medium
CN116805433A (en) * 2023-06-27 2023-09-26 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN116805433B (en) * 2023-06-27 2024-02-13 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN116958707A (en) * 2023-08-18 2023-10-27 武汉市万睿数字运营有限公司 Image classification method, device and related medium based on spherical machine monitoring equipment
CN116958707B (en) * 2023-08-18 2024-04-23 武汉市万睿数字运营有限公司 Image classification method, device and related medium based on spherical machine monitoring equipment
CN117456097A (en) * 2023-10-30 2024-01-26 南通海赛未来数字科技有限公司 Three-dimensional model construction method and device
CN117456097B (en) * 2023-10-30 2024-05-14 南通海赛未来数字科技有限公司 Three-dimensional model construction method and device
CN117596355A (en) * 2024-01-19 2024-02-23 安徽协创物联网技术有限公司 Camera assembly and mobile terminal equipment convenient to install on glass curtain wall
CN117596355B (en) * 2024-01-19 2024-03-29 安徽协创物联网技术有限公司 Camera assembly and mobile terminal equipment convenient to install on glass curtain wall

Similar Documents

Publication Publication Date Title
CN111523347A (en) Image detection method and device, computer equipment and storage medium
US20190130188A1 (en) Object classification in a video analytics system
US10891481B2 (en) Automated detection of features and/or parameters within an ocean environment using image data
CN110909630B (en) Abnormal game video detection method and device
CN109871780B (en) Face quality judgment method and system and face identification method and system
CN111462155B (en) Motion detection method, device, computer equipment and storage medium
CN109902601B (en) Video target detection method combining convolutional network and recursive network
CN113139403A (en) Violation behavior identification method and device, computer equipment and storage medium
CN111488855A (en) Fatigue driving detection method, device, computer equipment and storage medium
US20230206093A1 (en) Music recommendation method and apparatus
CN114550053A (en) Traffic accident responsibility determination method, device, computer equipment and storage medium
CN110555120B (en) Picture compression control method, device, computer equipment and storage medium
CN115082752A (en) Target detection model training method, device, equipment and medium based on weak supervision
CN112422909A (en) Video behavior analysis management system based on artificial intelligence
CN110879990A (en) Method for predicting queuing waiting time of security check passenger in airport and application thereof
CN111507467A (en) Neural network model training method and device, computer equipment and storage medium
CN114416260A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113901931A (en) Knowledge distillation model-based behavior recognition method for infrared and visible light videos
CN111444913B (en) License plate real-time detection method based on edge guiding sparse attention mechanism
CN111178370B (en) Vehicle searching method and related device
CN110738129B (en) End-to-end video time sequence behavior detection method based on R-C3D network
CN111127355A (en) Method for finely complementing defective light flow graph and application thereof
CN111553408B (en) Automatic test method for video recognition software
CN115205779A (en) People number detection method based on crowd image template
CN113963310A (en) People flow detection method and device for bus station and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination