CN111817902B - Method and system for controlling bandwidth - Google Patents


Info

Publication number
CN111817902B
Authority
CN
China
Prior art keywords
bandwidth
network
laser
characteristic
features
Prior art date
Legal status
Active
Application number
CN202010906834.4A
Other languages
Chinese (zh)
Other versions
CN111817902A (en)
Inventor
卢国鸣
Current Assignee
Xingrong Shanghai Information Technology Co ltd
Original Assignee
Shanghai Xingrong Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xingrong Information Technology Co., Ltd.
Priority to CN202010906834.4A
Publication of CN111817902A
Application granted
Publication of CN111817902B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14 Network analysis or design
    • H04L 41/147 Network analysis or design for predicting network behaviour

Abstract

The embodiments of the application disclose a method and a system for controlling bandwidth. The method comprises: acquiring people count features and network features at N time points in the scene where the bandwidth to be controlled is applied, wherein the people count features comprise a first people count feature and a second people count feature, and N is an integer greater than 0; and inputting the people count features and the network features of the N time points into a trained bandwidth control model to predict a target bandwidth, and adjusting the bandwidth to be controlled to the target bandwidth. The method provided by the application can combine multiple kinds of information that influence the bandwidth and adopts a purpose-built machine learning model structure matched to the characteristics of that information, obtaining better computational efficiency and prediction accuracy.

Description

Method and system for controlling bandwidth
Technical Field
The present application relates to the field of communications, and in particular, to a method and system for controlling bandwidth.
Background
With the growth of public services, more and more public places are equipped with public networks. Because the flow of people in public places changes dynamically, the demand for network bandwidth changes with it. A larger bandwidth satisfies a wider range of network requirements, but it also occupies more network resources. Other application scenarios face the same problem of allocating a reasonable bandwidth to a dynamic flow of people.
Therefore, to ensure that the bandwidth meets demand without occupying excessive network resources, a method and a system for controlling bandwidth are urgently needed.
Disclosure of Invention
One aspect of the present application provides a method of controlling bandwidth, the method comprising: acquiring people count features and network features at N time points in the scene where the bandwidth to be controlled is applied, wherein the people count features comprise a first people count feature and a second people count feature, and N is an integer greater than 0; and inputting the people count features and the network features of the N time points into a trained bandwidth control model to predict a target bandwidth, and adjusting the bandwidth to be controlled to the target bandwidth; wherein the bandwidth control model comprises a laser prediction layer, a video prediction layer and a fusion layer; the laser prediction layer obtains laser features based on the first people count features of the N time points; the video prediction layer obtains video features based on the second people count features of the N time points; and the fusion layer outputs the target bandwidth based on the laser features, the video features and the network features. The first people count feature is obtained by performing a first process one or more times, the first process comprising: a lidar emits laser beams and obtains feature data of a plurality of target objects based on the signals reflected from the laser beams; and a trained first people count recognition model determines the first people count feature based on the feature data of the plurality of target objects. The second people count feature is obtained by performing a second process one or more times, the second process comprising: extracting frame data from video data; and a trained second people count recognition model extracting at least one target object from the frame data and determining the second people count feature based on the at least one target object.
Another aspect of the present application provides a system for controlling bandwidth, the system comprising: a feature acquisition module, configured to acquire people count features and network features at N time points in the scene where the bandwidth to be controlled is applied, wherein the people count features comprise a first people count feature and a second people count feature, and N is an integer greater than 0; and a target bandwidth prediction module, configured to input the people count features and the network features of the N time points into a trained bandwidth control model to predict a target bandwidth, and to adjust the bandwidth to be controlled to the target bandwidth; wherein the bandwidth control model comprises a laser prediction layer, a video prediction layer and a fusion layer; the laser prediction layer obtains laser features based on the first people count features of the N time points; the video prediction layer obtains video features based on the second people count features of the N time points; and the fusion layer outputs the target bandwidth based on the laser features, the video features and the network features. The first people count feature is obtained by performing a first process one or more times, the first process comprising: a lidar emits laser beams and obtains feature data of a plurality of target objects based on the signals reflected from the laser beams; and a trained first people count recognition model determines the first people count feature based on the feature data of the plurality of target objects. The second people count feature is obtained by performing a second process one or more times, the second process comprising: extracting frame data from video data; and a trained second people count recognition model extracting at least one target object from the frame data and determining the second people count feature based on the at least one target object.
Another aspect of the present application provides an apparatus for controlling bandwidth, comprising at least one storage medium and at least one processor; the at least one storage medium is configured to store computer instructions; the at least one processor is configured to execute the computer instructions to implement a method of controlling bandwidth.
Another aspect of the present application provides a computer-readable storage medium, wherein the storage medium stores computer instructions that, when executed by a processor, implement a method of controlling bandwidth.
Drawings
The present application will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a control bandwidth system according to some embodiments of the present application;
FIG. 2 is an exemplary flow chart of a method of controlling bandwidth, according to some embodiments of the present application;
FIG. 3 is an exemplary flow chart of a method of obtaining the first people count feature and the second people count feature, shown in accordance with some embodiments of the present application.
Detailed Description
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only examples or embodiments of the application, and a person skilled in the art can apply the application to other similar scenarios based on these drawings without inventive effort. Unless otherwise apparent from the context or otherwise indicated, like reference numerals in the figures refer to the same structure or operation.
It should be understood that "system", "device", "unit" and/or "module" as used in this application is a method for distinguishing different components, elements, parts or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this application and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used herein to illustrate operations performed by systems according to embodiments of the present application. It should be understood that the operations are not necessarily performed in the exact order shown. Instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a control bandwidth system according to some embodiments of the present application. As shown in fig. 1, an application scenario to which the present application relates may include a first computing system 160 and/or a second computing system 130.
The second computing system 130 may be used to automatically control the bandwidth of the public network 170. For example, the public network 170 may serve public places such as airports, stations and shopping malls where the flow of people varies greatly, and the network may be of any type, such as a local area network or a wireless AP. When the flow of people in the public place changes (e.g., the number of people increases), the demand on the data transmission capability of the public network 170 changes accordingly (e.g., increases), and the second computing system 130 can adjust the bandwidth of the public network 170 to the target bandwidth 140, ensuring that the public network 170 meets the data transmission demand while avoiding wasted network resources.
The second computing system 130 may obtain the data 120, which includes people count features and network features. The data 120 may be obtained through pre-trained models and terminals (lidar 110-1, video surveillance 110-2) and may enter the second computing system 130 in a variety of common ways. The target bandwidth 140 may be output by the model 132 in the second computing system 130. Based on the output target bandwidth 140, the second computing system 130 can allocate bandwidth from the backbone network 180 to the current public network 170, i.e., adjust the bandwidth of the current public network 170 to the target bandwidth 140. The public network 170 may be any one or more of a number of wireless networks. The backbone network 180 may be any one or more wireless or wired networks.
The parameters of the model 162 may be obtained by training. The first computing system 160 may obtain multiple sets of sample data 150, where each set of training samples includes people number features, network features, and corresponding target bandwidths, and the first computing system 160 updates parameters of the model 162 through the multiple sets of sample data 150 to obtain a trained model. The parameters of the model 132 are derived from the trained model 162. Wherein the parameters may be communicated in any common manner.
A model (e.g., model 132 or/and model 162) may refer to a collection of several methods performed based on a processing device. These methods may include a number of parameters. When executing the model, the parameters used may be preset or may be dynamically adjusted. Some parameters may be obtained by a trained method, and some parameters may be obtained during execution. For a detailed description of the model referred to in this application, reference is made to the relevant part of the application.
The first computing system 160 and the second computing system 130 may be the same or different. The first computing system 160 and the second computing system 130 refer to systems with computing capability, and may include various computers, such as a server and a personal computer, or may be computing platforms formed by connecting a plurality of computers in various structures.
Processing devices may be included in first computing system 160 and second computing system 130, and may execute program instructions. The Processing device may include various common general purpose Central Processing Units (CPUs), Graphics Processing Units (GPUs), microprocessors, application-specific integrated circuits (ASICs), or other types of integrated circuits.
First computing system 160 and second computing system 130 may include storage media that may store instructions and may also store data. The storage medium may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof.
The first computing system 160 and the second computing system 130 may also include a network for internal connections and connections with the outside. Terminals for input or output may also be included. The network may be any one or more of a wired network or a wireless network.
Further details regarding the people count features, network features, and bandwidth control model are provided with reference to FIGS. 2-3 and are not repeated here.
In some embodiments, a feature acquisition module and a target bandwidth prediction module may be included in the system 100 (e.g., the first computing system 160 or the second computing system 130).
The feature acquisition module may be used to acquire people count features and network features at N time points in the scene where the bandwidth to be controlled is applied, wherein the people count features comprise a first people count feature and a second people count feature, and N is an integer greater than 0.
In some embodiments, the network features include network condition features and network demand features, the network demand features being obtained based on the event category.
For more details of the feature acquisition module, reference may be made to step 210, which is not repeated here.
The target bandwidth prediction module may be used to input the people count features and the network features of the N time points into the trained bandwidth control model, predict the target bandwidth, and adjust the bandwidth to be controlled to the target bandwidth. The bandwidth control model comprises a laser prediction layer, a video prediction layer and a fusion layer; the laser prediction layer obtains laser features based on the first people count features of the N time points; the video prediction layer obtains video features based on the second people count features of the N time points; and the fusion layer outputs the target bandwidth based on the laser features, the video features and the network features. The first people count feature is obtained by performing a first process one or more times, the first process comprising: a lidar emits laser beams and obtains feature data of a plurality of target objects based on the signals reflected from the laser beams; and a trained first people count recognition model determines the first people count feature based on the feature data of the plurality of target objects. The second people count feature is obtained by performing a second process one or more times, the second process comprising: extracting frame data from video data; and a trained second people count recognition model extracting at least one target object from the frame data and determining the second people count feature based on the at least one target object.
In some embodiments, the laser prediction layer, the video prediction layer, and the fusion layer are Deep Neural Networks (DNNs).
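As a concrete illustration, the three-layer structure above can be sketched as follows. This is a minimal numpy sketch, not the patented implementation: the hidden sizes, the number of time points N = 8, the network feature dimension, and the random initialization are all illustrative assumptions, since the patent only specifies that the three layers are DNNs.

```python
import numpy as np

def make_mlp(rng, in_dim, hidden, out_dim):
    """Build a one-hidden-layer MLP; returns a forward function."""
    w1 = rng.normal(0, 0.1, (in_dim, hidden))
    b1 = np.zeros(hidden)
    w2 = rng.normal(0, 0.1, (hidden, out_dim))
    b2 = np.zeros(out_dim)
    def forward(x):
        return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2
    return forward

def build_bandwidth_model(n_points=8, net_dim=4, hidden=16, seed=0):
    rng = np.random.default_rng(seed)
    # Laser prediction layer: first people count features -> laser features
    laser_layer = make_mlp(rng, n_points, hidden, hidden)
    # Video prediction layer: second people count features -> video features
    video_layer = make_mlp(rng, n_points, hidden, hidden)
    # Fusion layer: laser + video + network features -> target bandwidth
    fusion_layer = make_mlp(rng, 2 * hidden + net_dim, hidden, 1)
    def predict(first_counts, second_counts, net_feats):
        fused = np.concatenate([laser_layer(first_counts),
                                video_layer(second_counts),
                                net_feats])
        return float(fusion_layer(fused)[0])
    return predict
```

In a real deployment the three sub-networks would be trained jointly on the labeled samples described for model 162; here the weights are random and the sketch only shows the data flow.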
In some embodiments, the target bandwidth prediction module is further configured to: for each of the N time points, determine a first sampling rate with a first sampling rate model based on a quality feature of the signals reflected from the laser beams, and determine a second sampling rate with a second sampling rate model based on a quality feature of the video data; perform the first process a plurality of times based on the first sampling rate and the second process a plurality of times based on the second sampling rate; and determine the first people count feature based on the results of the multiple executions of the first process and the second people count feature based on the results of the multiple executions of the second process. For more details, see step 220, which is not repeated here.
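The sampling-rate mechanism above can be sketched as follows. The mapping from a signal-quality score to a number of repetitions, and the use of averaging to aggregate the repeated results, are illustrative assumptions; the patent does not specify either.

```python
def sampling_rate_from_quality(quality, lo=1, hi=10):
    """Map a quality score in [0, 1] to a number of repetitions.

    Lower-quality signals are sampled more often so averaging can
    suppress noise; the linear mapping is an illustrative assumption.
    """
    quality = min(max(quality, 0.0), 1.0)
    return lo + round((1.0 - quality) * (hi - lo))

def aggregate_counts(run_once, n_runs):
    """Run a counting process n_runs times and average the results."""
    results = [run_once() for _ in range(n_runs)]
    return sum(results) / len(results)
```

For example, with perfect quality the process runs once; with the worst quality it runs ten times, and the people count feature for that time point is the mean of the ten counts.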
Fig. 2 is an exemplary flow chart of a method of controlling bandwidth according to some embodiments of the present application.
In some embodiments, the second computing system 130 may define a plurality of processing layers through the neural network model, respectively process the plurality of features to obtain a plurality of processing results, then fuse the plurality of processing results, output a predicted target bandwidth, and further adjust the bandwidth to be controlled to the target bandwidth.
In some embodiments, the plurality of features includes a people count feature, which can be obtained in a variety of ways. In some embodiments, the second computing system 130 may process people count features obtained in different ways with different processing layers.
In some embodiments, the second computing system 130 may also determine the number of times to acquire the plurality of features based on the time interval and frequency at which the plurality of features are acquired.
As shown in fig. 2, the control bandwidth method 200 may include:
Step 210: acquiring people count features and network features at N time points in the scene where the bandwidth to be controlled is applied, wherein the people count features comprise a first people count feature and a second people count feature, and N is an integer greater than 0. Specifically, step 210 may be performed by the feature acquisition module.
Bandwidth is the amount of data that can pass through a network link per unit time and is used to characterize the ability of the network link to transmit data. In some embodiments, the bandwidth is in units of bits per second (bps). It can be understood that the larger the bandwidth, the stronger the capability of transmitting data, and the more network resources are occupied.
A link refers to a physical channel for data; a total link may be divided by switches into a plurality of sub-links, backup links, and the like. A backup link is a link that temporarily transmits no data and can be used to transmit data when a sub-link fails. It can be understood that the total link corresponds to the backbone network, i.e., the total bandwidth of the backbone network is the data transmission capability of the total link; a sub-link corresponds to a local network (e.g., a public network), i.e., the bandwidth of the local network is the data transmission capability of the sub-link; and a backup link corresponds to a backup network, i.e., the bandwidth of the backup network is the data transmission capability of the backup link. It follows that the sum of the bandwidths of the local networks and the backup network equals the total bandwidth of the backbone network.
A time point is a moment in time, and the N time points are N moments after the bandwidth was last controlled. In some embodiments, a time point spans an extremely short duration, such as 1 second.
In some embodiments, the N time points may be evenly spaced, for example one time point every 300 seconds. In some embodiments, the N time points may be unevenly spaced based on how the flow of people is distributed over time; for example, the interval between time points is 120 seconds during peak traffic periods and 600 seconds during off-peak periods. In some embodiments, the feature acquisition module may obtain the N time points with a manually preset interval. In some embodiments, the feature acquisition module may also automatically adjust the interval algorithmically based on the people count feature obtained at the previous time point. In some embodiments, the feature acquisition module may obtain the N time points in other ways, which this embodiment does not limit.
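The automatic interval adjustment described above can be sketched as a simple rule. The threshold separating peak from off-peak traffic is an illustrative assumption, since the patent leaves the adjustment algorithm open.

```python
def next_interval_seconds(last_count, peak_threshold=200,
                          peak_interval=120, valley_interval=600):
    """Choose the spacing to the next time point from the most recent
    people count: sample densely during peaks, sparsely during valleys.

    The threshold of 200 people is a hypothetical value; the 120 s and
    600 s intervals follow the example in the text.
    """
    return peak_interval if last_count >= peak_threshold else valley_interval
```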
The bandwidth to be controlled is the bandwidth of the public network whose data transmission capability needs to be adjusted. The scenes where the bandwidth to be controlled is applied include public places with large variations in the flow of people, such as airports, stations, shopping malls and meeting rooms. It can be understood that as the flow of people changes, the data transmission demand changes with it, so the bandwidth to be controlled must be controlled in real time: based on the changing data transmission demand of the public network, network management software dynamically allocates bandwidth to the public network from the total bandwidth of the backbone network, ensuring that the controlled target bandwidth meets the data transmission demand while avoiding wasting backbone network resources or occupying the bandwidth of other local networks and the backup network.
Illustratively, the total bandwidth of the backbone network is 100 bps, currently allocated as follows: 20 bps to the bandwidth to be controlled of the airport public network, 30 bps to local network 1, 25 bps to local network 2, and 25 bps to the backup network. If the flow of people at the airport decreases, the data transmission demand of the airport public network decreases, say to only 5 bps, so the current bandwidth to be controlled wastes 15 bps; the bandwidth to be controlled should therefore be adjusted to 5 bps, i.e., 15 bps is allocated to local network 1, local network 2 and/or the backup network. If the flow of people at the airport increases, the data transmission demand of the airport public network increases, say to 30 bps, which exceeds the current 20 bps; the bandwidth to be controlled should therefore be adjusted to 30 bps, i.e., 10 bps of bandwidth is obtained from local network 1, local network 2 and/or the backup network.
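The arithmetic of this example can be sketched as follows. Drawing the entire surplus or deficit from the backup network is just one of the reallocation choices the example permits (it could equally be split across the other local networks); the function name and that choice are illustrative.

```python
def retarget_bandwidth(allocation, network, target):
    """Reallocate bandwidth so `network` gets `target` while keeping the
    backbone total constant.

    `allocation` maps network name -> current bandwidth (bps).  For
    simplicity the freed (+) or needed (-) bandwidth is exchanged with
    the backup network only; this is an illustrative choice.
    """
    alloc = dict(allocation)
    delta = alloc[network] - target          # freed (+) or needed (-)
    alloc[network] = target
    alloc["backup"] += delta
    if alloc["backup"] < 0:
        raise ValueError("not enough spare bandwidth on the backup link")
    return alloc

# The airport example above: 100 bps backbone total.
current = {"airport": 20, "local1": 30, "local2": 25, "backup": 25}
smaller = retarget_bandwidth(current, "airport", 5)    # traffic drops
larger = retarget_bandwidth(current, "airport", 30)    # traffic grows
```

In both cases the sum over all networks stays at the backbone total of 100 bps, matching the invariant stated earlier for the total link.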
A people count feature is a feature characterizing the flow of people. In some embodiments, the people count features include a first people count feature and a second people count feature.
The first people count feature is a people count feature obtained by a lidar. In some embodiments, the first people count feature may be represented by a numerical head count, e.g., 100. A lidar is a radar system that detects target objects by emitting laser beams, and comprises a laser emitting system, a laser receiving system and an information processing system. The target objects are the people and objects within the lidar's detection range that reflect the laser beams emitted by the lidar. A laser beam is an optical signal emitted by the laser emitting system in real time.
In some embodiments, the lidar may be, but is not limited to, one or a combination of: mechanical lidar, solid-state lidar, micro-electro-mechanical systems (MEMS) lidar, Flash area-array lidar, optical phased array (OPA) solid-state lidar, and hybrid solid-state lidar.
In some embodiments, the first people count feature is obtained by performing a first process one or more times, the first process comprising: the lidar emits laser beams and acquires feature data of a plurality of target objects based on the signals reflected from the laser beams, wherein the feature data of a target object comprises at least one or a combination of distance, direction, shape and absorptance; and the trained first people count recognition model determines the first people count feature based on the feature data of the plurality of target objects.
Specifically, in the laser emitting system, the excitation source periodically drives the laser to emit laser pulses, and the beam controller controls the direction and the number of lines of the emitted beams so that the laser modulator emits one or more laser beams toward the target objects. In some embodiments, a laser scanning system controls the direction of the laser beam by moving one or a combination of the laser emitting system, a micro-electro-mechanical systems (MEMS) micro-mirror, and an optical phased array (OPA), emitting laser beams toward the target objects region by region; examples include mechanical lidar, solid-state lidar, MEMS lidar and OPA solid-state lidar. In some embodiments, the laser emitting system can also emit laser beams to all regions simultaneously to achieve full coverage; an example is Flash area-array lidar.
The signals reflected from the laser beams include the laser beam round-trip time, the laser beam frequency change, the degree of laser beam signal attenuation, the laser beam angle, and the like. In some embodiments, the lidar may receive the reflected laser beams via the laser receiving system and generate reflected signals. Illustratively, the feature acquisition module acquires the signals reflected from the lidar's laser beams at the N time points: the reflected signal at the 1st time point, the reflected signal at the 2nd time point, ..., the reflected signal at the Nth time point.
The feature data of the plurality of target objects are data characterizing the plurality of target objects within the lidar's detection range. As mentioned above, the feature data of a target object comprises at least one or a combination of distance, direction, shape and absorptance. In some embodiments, the lidar may acquire the feature data of a target object via the information processing system based on the signals reflected from one or more laser beams. For example, the distance of the target object is determined from the laser beam round-trip time and the speed of the laser beam. As another example, the direction of the target object is determined from the laser beam angle and the distance of the target object. As another example, the shape of the target object may be determined from a plurality of laser beam angles. As yet another example, the absorptance of the target object is determined from the degree of laser beam signal attenuation and the distance of the target object. Continuing the above example, the lidar's information processing system acquires the feature data of the plurality of target objects at the N time points from the signals reflected at those time points: the feature data at the 1st time point, at the 2nd time point, ..., at the Nth time point.
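The first two computations mentioned above are standard time-of-flight relations and can be sketched directly; the function names are illustrative, not from the patent.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """Distance to the target: the beam covers the path twice,
    so d = c * t / 2."""
    return C * t_seconds / 2.0

def position_from_angle(distance, azimuth_rad):
    """Planar position of the target from the beam's azimuth angle
    and the measured distance (polar -> Cartesian)."""
    return (distance * math.cos(azimuth_rad),
            distance * math.sin(azimuth_rad))
```

A 200 ns round trip, for instance, corresponds to a target roughly 30 m away; shape and absorptance estimation would build on many such per-beam measurements.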
As described above, the first people count recognition model may determine the first people count feature based on the feature data of the plurality of target objects. Specifically, the first people count recognition model may identify persons among the plurality of target objects based on their feature data, and finally determine the first people count feature from the number of identified persons.
In some embodiments, the first people count recognition model may comprise a rule-based model. Specifically, the model compares the feature data of each target object with the known range of that feature for a human body to obtain a probability for each feature (i.e., the probability, based on that feature, that the target object is a person), combines the per-feature probabilities into an overall probability that the target object is a person, and identifies the persons among the target objects by comparing this probability with a threshold.
Illustratively, the characteristic data of an object include shape, absorption rate and direction. Taking the feature "shape" as an example, the known human-body shape range is: height 0.5-2.5 m, width 0.2-1 m. The shape of the target object is 1.8 m in height and 0.5 m in width, which falls within this range, so the probability corresponding to "shape" is 1. Similarly, the probabilities corresponding to "absorptance" and "direction" are 0.7 and 0.8, respectively. Multiplying 1, 0.7 and 0.8 gives 0.56 as the probability that the target object is a person; since 0.56 is greater than the threshold of 0.5, the target object is judged to be a person.
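The rule-based pipeline in this example can be sketched as follows, mirroring the numbers above (1 × 0.7 × 0.8 = 0.56 > 0.5); the helper names and the hard 0/1 shape probability are assumptions for illustration.

```python
def shape_probability(height_m: float, width_m: float) -> float:
    # Known human shape range from the example: height 0.5-2.5 m, width 0.2-1 m.
    in_range = 0.5 <= height_m <= 2.5 and 0.2 <= width_m <= 1.0
    return 1.0 if in_range else 0.0

def is_person(feature_probs, threshold=0.5):
    """Combine per-feature probabilities by multiplication, then threshold."""
    p = 1.0
    for prob in feature_probs:
        p *= prob
    return p > threshold, p

# shape prob 1, absorptance prob 0.7, direction prob 0.8 -> 0.56 > 0.5
judged, p = is_person([shape_probability(1.8, 0.5), 0.7, 0.8])
```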
In some embodiments, the first person identification model may also include a neural network model. Specifically, the first person identification model maps the input feature data of the target object into a probability, and then determines whether the target object is a person based on the probability.
In some embodiments, the first person identification model may also be another model, and the embodiment is not limited.
Further, the first person number recognition model determines the first person feature from the persons identified among the plurality of objects. Illustratively, if the model identifies 250 persons among the objects based on the feature data at the 1st time point, the first person feature at the 1st time point is 250; similarly, the model outputs the first person feature at the 2nd time point, ..., and at the Nth time point.
In some embodiments, the first person number recognition model may be trained based on a number of training samples with identifications. Specifically, the training samples with identifications are input into the model, and its parameters are updated through training.
In some embodiments, the training samples may be target-object feature data; see the description of how the laser radar acquires the feature data from the signals reflected by the laser beams. In some embodiments, the identification may be the first person feature, which may be obtained by manual annotation. In some embodiments, training may be performed by a commonly used method based on the training samples.
It is to be understood that a first person feature may be obtained by performing a first process once per point in time. In some embodiments, performing the first process a plurality of times may obtain a plurality of results of performing the first process, and obtaining a first person characteristic based on the plurality of results. For a detailed description of the first-person feature obtained by performing the first process multiple times, refer to fig. 3, which is not described herein again.
The second demographic characteristic is a demographic characteristic obtained through the video data. In some embodiments, the second people characteristic may be represented by a numerical value of people, for example, 200.
Video data are moving images recorded as electrical signals and composed of a plurality of temporally successive still images; each still image is one frame of the video data. In some embodiments, the video data for a time point may contain multiple still images.
In some embodiments, the format of the video data may be, but is not limited to, one or more of: Digital Video Disk (DVD), Flash Video (FLV) streaming media format, Motion Picture Experts Group (MPEG), Audio Video Interleaved (AVI), Video Home System (VHS), and the RealMedia (RM) container format. In some embodiments, the feature acquisition module may acquire video data by reading data from a monitor or camera, invoking an associated interface, or by other means.
Illustratively, the feature acquisition module acquires N video data at N time points, respectively: 1 st time point video data, 2 nd time point video data … … nth time point video data.
As previously mentioned, the second person feature may be obtained by performing the second process one or more times, the second process including: extracting frame data from the video data; and having the trained second person number recognition model extract at least one target object from the frame data and determine the second person feature based on the at least one target object. Frame data are still images extracted from the video data. Continuing with the above example, frame data for the N time points may be extracted from the video data: frame data at the 1st time point, at the 2nd time point, ..., and at the Nth time point.
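A minimal sketch of how a still image (frame) might be selected from the video data for each time point, assuming a fixed frame rate; the function names and the 25 fps default are hypothetical, not from the original.

```python
def frame_index_at(time_point_s: float, fps: float = 25.0) -> int:
    """Index of the frame nearest to a given time point, assuming constant fps."""
    return round(time_point_s * fps)

def frames_for_time_points(time_points, fps=25.0):
    """Frame indices for a sequence of time points (e.g., the N sample times)."""
    return [frame_index_at(t, fps) for t in time_points]

# Time points 100 s apart, as in the running example
indices = frames_for_time_points([100, 200, 300])
```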
Specifically, the trained second person number recognition model may determine whether the target object in the frame data is an image of a person, and determine the second person number feature based on the determination result.
The second people recognition model can include an extraction layer, a feature acquisition layer, a recognition layer, and a statistics layer.
The target object is an image block corresponding to a person or object in the frame data. In some embodiments, the extraction layer may extract the plurality of target objects from the frame data by a multi-scale sliding window, Selective Search, a neural network, or other methods.
For example, if the frame data is a 200 × 200-pixel still image, sliding a 10 × 10-pixel window over it with step 1 yields 191 × 191 image blocks ((200 − 10)/1 + 1 = 191 positions per axis); sliding a 20 × 20-pixel window with step 1 then yields a further 181 × 181 image blocks, and so on, giving 191 × 191 + 181 × 181 + ... image blocks in total. Some of the obtained image blocks may contain target objects. It is understood that if persons or objects exist in one frame, a plurality of target objects may be determined from the image blocks obtained from that frame, for example by an image recognition technique (e.g., an image recognition model). The scale, step size and/or number of crops of the extraction layer's sliding windows may be preset parameters.
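The window-position arithmetic follows the standard formula (image − window) / step + 1 per axis; the sketch below applies it to the 200 × 200 example (a 10 × 10 window with step 1 gives 191 positions per axis). The function name is illustrative.

```python
def sliding_window_count(image_size: int, window: int, step: int = 1) -> int:
    """Number of window positions along one axis: (image - window) // step + 1."""
    return (image_size - window) // step + 1

# 200x200 image: 10x10 window at stride 1, then 20x20 window at stride 1
per_axis_10 = sliding_window_count(200, 10)   # positions per axis for 10x10
per_axis_20 = sliding_window_count(200, 20)   # positions per axis for 20x20
total = per_axis_10 ** 2 + per_axis_20 ** 2   # image blocks from the two scales
```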
As another example, the frame data may be cropped directly, cutting out the image regions corresponding to the persons or objects it contains as target objects.
In some embodiments, the feature acquisition layer may acquire a feature vector for each target object. Specifically, the feature acquisition layer acquires a plurality of image features of the target object, and then fuses the plurality of image features to obtain a feature vector of the target object.
In some embodiments, the image features of the target object include, but are not limited to: Haar features, Histogram of Oriented Gradients (HOG) features, Local Binary Pattern (LBP) features, Edgelet features, Color Self-Similarity (CSS) features, Integral Channel Features, and Census Transform Histogram (CENTRIST) features, among others.
In some embodiments, the feature acquisition layer may be a neural network model. Preferably, the feature acquisition layer is a Convolutional Neural Network (CNN). The number of layers of the neural network model and the number of neurons in each layer can be preset, and the matrix parameters of each neuron can be obtained through training.
In some embodiments, the recognition layer may determine whether the target object is an image containing a complete person based on the feature vector of each target object. In some embodiments, the identification layer includes a classification model. Specifically, the input feature vector of each target object is mapped to a numerical value or probability, and whether the target object is an image of a person is determined based on the numerical value or probability. The parameters of the mapping function in the classification model may be obtained by training.
In some embodiments, the classification model may be, but is not limited to, a support vector machine model, a Logistic regression model, a naive bayes classification model, a gaussian distributed bayes classification model, a decision tree model, a random forest model, a KNN classification model, a neural network model, or the like.
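As a hedged sketch of such a classification model, the following logistic mapping turns a feature vector into a probability and thresholds it; the weights and inputs are made-up illustrative values, and a trained model would learn such parameters rather than take them as given.

```python
import math

def logistic_classify(features, weights, bias=0.0, threshold=0.5):
    """Map a feature vector to a probability via a sigmoid, then threshold it."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = 1.0 / (1.0 + math.exp(-z))          # probability the block is a person
    return p > threshold, p

# Hypothetical 3-element feature vector and learned weights
judged_person, prob = logistic_classify([0.9, 0.4, 0.7], weights=[2.0, 1.0, 1.5])
```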
In some embodiments, the statistics layer determines the second person feature based on the determination results for the at least one target object. Illustratively, if the second person number recognition model identifies 400 target objects in the frame data at the 1st time point and judges 300 of them to be persons, the second person feature at the 1st time point is 300; similarly, the model outputs the second person feature at the 2nd time point, ..., and at the Nth time point.
In some embodiments, the second person number recognition model may be trained based on a number of training samples with identifications. Specifically, a training sample with the identification is input into the second people number recognition model, and the parameters of the second people number recognition model are updated through training.
In some embodiments, the training samples may be frame data, see in particular the associated description of extracting frame data from the video data. In some embodiments, the identification may be a second demographic. In some embodiments, the second demographic characteristic may be obtained through manual annotation. In some embodiments, training may be performed by a commonly used method based on the training samples.
It is to be appreciated that a second population characteristic may be obtained by performing a second process once per point in time. In some embodiments, performing the second process a plurality of times may obtain a plurality of results of performing the second process and obtain a second demographic based on the plurality of results. For a detailed description of performing the second process a plurality of times to obtain the second population characteristic, refer to fig. 3, which is not repeated herein.
Network characteristics are characteristics that characterize the current network conditions and potential network requirements. In some embodiments, the network characteristics include network condition characteristics and network demand characteristics.
Network condition features are data that characterize the current network condition. In some embodiments, the network condition characteristics include bandwidth, bandwidth utilization, traffic type, and communication mode, among others.
Bandwidth utilization is the ratio of the amount of data actually transmitted over a network link to the link's transmission capacity. The service type is the data exchange mode between terminals, for example xDSL or PON service. In some embodiments, the service type may be represented by a corresponding code or identification; for example, xDSL is 1 and PON service is 2. The communication mode is the data transmission and reception mode, such as full-duplex mode and half-duplex mode. In some embodiments, the communication mode may also be represented by a corresponding code or identification; for example, time division duplex mode is 1 and frequency division duplex mode is 2.
In some embodiments, the network condition features may be represented as a vector whose elements are the data of each feature. Illustratively, if the network condition data comprise a bandwidth of 20 bps, a bandwidth utilization of 80%, the traffic type "xDSL" (coded as 1) and the communication mode "full-duplex" (coded as 2), the network condition feature can be represented by the vector (20, 80%, 1, 2); the vector may also be normalized so that its values fall within a preset numerical range.
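The normalization mentioned above could be a simple per-feature min-max scaling; in the sketch below the bounds are assumptions for illustration only, not values from the original.

```python
def min_max_normalize(values, bounds):
    """Scale each feature into [0, 1] given assumed (min, max) bounds per feature."""
    return [(v - lo) / (hi - lo) for v, (lo, hi) in zip(values, bounds)]

# bandwidth 20 bps, utilization 0.8, traffic-type code 1, communication-mode code 2
raw = [20, 0.8, 1, 2]
bounds = [(0, 100), (0.0, 1.0), (1, 2), (1, 2)]  # illustrative, assumed ranges
normalized = min_max_normalize(raw, bounds)
```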
In some embodiments, the feature acquisition module may acquire the network condition feature by reading test data and related information storage data of the current network, invoking a related interface, or other means.
An event is an objective fact that affects the demand for data transmission. The network demand characteristic may be characterized by the event type, which includes holiday peaks, emergencies, normal events, and the like. In some embodiments, the network demand characteristic may be represented by a code or identification reflecting the event type; for example, a normal event is 1, an emergency is 2, a holiday is 3, and so on.
In some embodiments, the feature capture module may capture the event type by manual entry, reading stored data, invoking a correlation interface, or other means.
And step 220, inputting the people number features and the network features of the N time points into the trained bandwidth control model, predicting the target bandwidth, and adjusting the bandwidth to be controlled to the target bandwidth. In particular, this step 220 may be performed by the target bandwidth prediction module.
The input of the bandwidth control model is the people number features and the network features, and its output is the target bandwidth. In some embodiments, before the people number features and the network demand features are input into the model, the feature values may be bucketized (discretized) and the features represented as vectors. In some embodiments, the bandwidth control model may be a neural network model. The neural network model may include a plurality of processing layers, each consisting of a plurality of neurons, each neuron applying a matrix operation to its input. The parameters of these matrices are obtained by training.
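Bucketizing a feature value before model input can be done with a sorted list of bucket boundaries; this is a minimal sketch, and the boundaries below are illustrative assumptions rather than values from the original.

```python
import bisect

def bucketize(value, boundaries):
    """Return the index of the bucket containing value, given sorted boundaries."""
    return bisect.bisect_right(boundaries, value)

# e.g. people counts bucketed into <50, 50-99, 100-199, >=200 (assumed buckets)
boundaries = [50, 100, 200]
bucket = bucketize(120, boundaries)   # falls in bucket index 2 (100-199)
```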
The bandwidth control model may be any existing model that enables processing of multiple features, e.g., CNN, DNN, etc. The bandwidth control model may also be a custom-made model according to requirements.
Illustratively, as shown in fig. 2, the customized structure of the bandwidth control model may include a laser prediction layer, a video prediction layer, and a fusion layer.
In some embodiments, the laser prediction layer obtains the laser feature based on the first person features at the N time points. The laser feature is a vector representing the trend of the people count acquired by the laser radar. In some embodiments, the elements of the vector are determined from the rate of change of the first person feature between two time points, for example the ratio of the change in the first person feature between two time points to the interval between them. Specifically, the laser prediction layer maps the input first person features at the N time points to the laser feature. In some embodiments, the laser prediction layer may be a Deep Neural Network (DNN), and its parameters may be obtained by training. Illustratively, an untrained laser prediction layer takes the raw rate of change between each pair of adjacent time points as the elements of the output vector, e.g., X = (-0.1, -0.1). The trained layer applies a weight to each rate of change; for example, the rate of change between the more recent time points is weighted higher, with weights of 1 and 2 respectively, giving the laser feature X = (-0.1, -0.2). For the training process, refer to the training of the bandwidth control model, which is not repeated here.
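The change-rate computation and weighting described in this paragraph can be sketched as follows, reproducing the example's numbers (counts 100, 90, 80 at 100-second intervals, weights 1 and 2); an actual DNN layer would learn the weighting rather than apply it explicitly, so the function names and signatures are illustrative assumptions.

```python
def change_rates(counts, interval_s):
    """Rate of change of the people count between consecutive time points."""
    return [(b - a) / interval_s for a, b in zip(counts, counts[1:])]

def laser_feature(counts, interval_s, weights=None):
    """Untrained layer: raw rates; trained layer: per-rate weights applied."""
    rates = change_rates(counts, interval_s)
    if weights is None:
        return rates
    return [w * r for w, r in zip(weights, rates)]

# counts 100, 90, 80 at 100 s intervals -> rates (-0.1, -0.1);
# trained weights (1, 2) emphasize the more recent interval -> (-0.1, -0.2)
x = laser_feature([100, 90, 80], 100, weights=[1, 2])
```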
In some embodiments, the video prediction layer obtains the video feature based on the second person features at the N time points. The video feature is a vector representing the trend of the people count obtained from the video data. In some embodiments, the elements of the vector are determined from the rate of change of the second person feature between two time points, for example the ratio of the change in the second person feature between two time points to the interval between them. Specifically, the video prediction layer maps the input second person features at the N time points to the video feature. In some embodiments, the video prediction layer may be a Deep Neural Network (DNN).
Similar to the laser prediction layer, the parameters of the video prediction layer can also be obtained by training. Refer to the laser prediction layer in detail, and are not described herein.
In some embodiments, the fusion layer outputs the target bandwidth based on the laser characteristics, the video characteristics, and the network characteristics.
Specifically, the fusion layer may fuse the input laser feature, video feature, and network feature into a vector, and then map the vector into a numerical value, i.e., a predicted target bandwidth. Further, the bandwidth control model adjusts the bandwidth to be controlled to the predicted target bandwidth. The bandwidth adjustment can be realized by common network management software or other manual command modes.
In some embodiments, the fusion layer may be a Deep Neural Network (DNN). Preferably, the fusion layer is a two-layer deep neural network. The two layers of deep neural networks can better fuse laser characteristics, video characteristics and network characteristics, and under-fitting caused by a single layer of neural network and over-fitting caused by a plurality of layers of neural networks are avoided; meanwhile, a complex calculation process can be avoided, so that the prediction efficiency of the target bandwidth is improved, and the dynamic real-time control of the bandwidth is realized. The parameters of the fusion layer can be obtained through training, the parameters of the fusion layer are a matrix of neurons in the neural network, and elements in the matrix can comprise weights corresponding to different features and parameters of a mapping function.
Illustratively, in an airport public network application scenario, the bandwidth to be controlled after the last bandwidth control is 20bps, N =3 is taken, and the interval between 3 time points is 100 seconds. At the 1 st time point 100 seconds after the bandwidth is controlled last time, the laser radar emits laser beams, then characteristic data of a plurality of target objects are obtained based on signals reflected by the received laser beams, a first person identification model identifies persons from the plurality of target objects based on the characteristic data of the target objects, and then the first person characteristic is determined to be 100 based on the number of the identified persons; and simultaneously extracting frame data from the video data acquired at the 1 st time point, extracting at least one target object in the frame data by using a second people number recognition model, judging whether the target object is a person image, and determining that the second people number is 120 based on the judgment result of the at least one target object. Similarly, the feature obtaining module obtains the first person feature 90 and the second person feature 80 at the 2 nd time point 200 seconds after the last control bandwidth, and the first person feature 80 and the second person feature 80 at the 3 rd time point 300 seconds after the last control bandwidth, respectively. Finally, the first population characteristics 100, 90 and 80 and the second population characteristics 120, 80 and 80 at 3 time points are obtained. Network condition characteristics, e.g., Z = (20, 80%, 1, 2), and network demand characteristics, e.g., 2, of the bandwidth to be controlled are obtained simultaneously.
Then inputting the 3 first person features, the 3 second person features, the network condition features and the network demand features into a bandwidth control model, wherein a laser prediction layer of the bandwidth control model acquires laser features based on the 3 first person features (100, 90 and 80), for example, X = (-0.1, -0.2); the video prediction layer acquires a video feature, for example, Y = (-0.4, 0), based on the 3 second demographics (120, 80, and 80).
Further, the fusion layer fuses the laser feature X = (-0.1, -0.2), the video feature Y = (-0.4, 0), the network condition feature (20, 80%, 1, 2) and the network demand feature 2 into one vector. Illustratively, the elements of the laser feature and the video feature are averaged position-wise to obtain the vector (-0.25, -0.1); these elements are then combined by a weighted average, (-0.25 × 0.2) + (-0.1 × 0.8) = -0.13, giving the predicted trend of the people count; the trend -0.13, the network condition feature (20, 80%, 1, 2) and the network demand feature 2 are then spliced into the vector (-0.13, 20, 80%, 1, 2, 2). Further, the fusion layer maps this vector to a predicted target bandwidth of 25 bps. Finally, the bandwidth control model adjusts the bandwidth to be controlled from 20 bps to the predicted target bandwidth of 25 bps.
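The fusion arithmetic of this example can be reproduced directly; note that with the stated weights 0.2 and 0.8 the weighted average evaluates to (-0.25 × 0.2) + (-0.1 × 0.8) = -0.13. The function signature is an illustrative assumption; the real fusion layer is a trained network that also performs the final mapping to a bandwidth value.

```python
def fuse(laser, video, time_weights, network_condition, network_demand):
    # Position-wise average of the two modality features
    avg = [(l + v) / 2 for l, v in zip(laser, video)]
    # Weighted average over time points (more recent weighted higher)
    trend = sum(w * a for w, a in zip(time_weights, avg))
    # Splice the trend with the network features into one input vector
    return [trend] + network_condition + [network_demand]

# X = (-0.1, -0.2), Y = (-0.4, 0), weights (0.2, 0.8), utilization as 0.8
vec = fuse([-0.1, -0.2], [-0.4, 0.0], [0.2, 0.8], [20, 0.8, 1, 2], 2)
```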
In some embodiments, the laser prediction layer, the video prediction layer, and the fusion layer in the bandwidth control model may be jointly trained based on a large number of training samples with identifications.
In some embodiments, the training samples may be previously collected people number features and network features, acquired in the manner described in step 210. In some embodiments, the identification is the actual bandwidth that satisfied the data transmission volume and bandwidth utilization corresponding to the training sample. In some embodiments, the identification may be obtained by flow monitoring software or by other conventional methods.
Specifically, a training sample with an identifier is input into a bandwidth control model, and parameters of a laser prediction layer, a video prediction layer and a fusion layer are updated simultaneously through training.
Various factors affect the bandwidth requirement during bandwidth prediction and adjustment. The most obvious factor is the number of people, but the real-time head count alone cannot accurately reflect the bandwidth requirement, because people behave differently in different scenes. For example, in a shopping mall, the demand on the network is very different when many people are browsing goods than when many people are leaving. It can be understood that if the bandwidth were fixed at the level the public network requires when the flow of people is at its maximum, the network's data transmission requirement would be met, but when the flow of people is small the public network would occupy an excessive share of the total backbone bandwidth. Therefore, estimating the data transmission demand from the flow of people over a period of time improves the accuracy of the predicted target bandwidth. Since the accuracy of people-flow statistics from a single modality is insufficient, acquiring the people-flow situation by combining laser radar and video recognition is more advantageous. Further, other information, such as the various network features, can also assist the prediction. In some embodiments of the present application, multiple kinds of information (e.g., multiple features) are combined, which is beneficial to improving the accuracy of bandwidth prediction and bandwidth control.
Because this information is heterogeneous, it is difficult to establish explicit rules that derive a prediction from each type of information. By means of machine learning, a predictive model can be learned automatically from data and achieve high accuracy.
On the other hand, because there are many related information features, adopting various standard machine learning models can lead to problems such as an excessive number of model parameters, a high demand on the amount of training data, and a tendency to overfit. In some embodiments of the application, a custom model is adopted: custom neural-network layers integrate the features from multiple time points, a fusion layer combines them with the network features to make the prediction, and the bandwidth to be controlled on the public network is then adjusted to the predicted target bandwidth. Compared with applying various standard machine learning models, the scheme provided by the application adapts better to the characteristics of the information used and the problem to be solved, avoiding the low operating efficiency, excessive training-data requirements or overfitting caused by too many model parameters.
To sum up, aiming at the problems of network bandwidth prediction and control in a complex scene, the scheme provided by the application can more fully acquire data and obtain information helpful for prediction, and a self-defined machine learning model structure is adopted according to the characteristics of the information so as to obtain better operation efficiency and prediction effect.
FIG. 3 is an exemplary flow chart of a method of obtaining a first demographic characteristic and a second demographic characteristic, shown in accordance with some embodiments of the present application.
As described above, in some embodiments, the target bandwidth prediction module executes the first process and the second process a plurality of times, and then obtains the first demographic characteristic and the second demographic characteristic based on the results of the plurality of first process executions and the results of the plurality of second process executions, respectively.
As shown in fig. 3, a method 300 of performing a first process and a second process multiple times to obtain a first demographic and a second demographic may include:
for each of the N time points, a first sampling rate model determines a first sampling rate based on a quality characteristic of a signal reflected by the laser beam and a second sampling rate model determines a second sampling rate based on a quality characteristic of the video data, step 310.
The input to the first sample rate model is a quality characteristic of the signal reflected by the laser beam and the output is the first sample rate.
Wherein the quality characteristic of the signal reflected by the laser beam is an index for evaluating the quality of the signal reflected by the laser beam. In some embodiments, the quality characteristics of the signal reflected by the laser beam include beam quality factor, power stability, and the like. The beam quality factor is an index for representing the quality of the laser beam, and the closer the beam quality factor is to 1, the better the beam quality is. Power stability is an indicator of the instability of the laser beam output power over time, including RMS stability and peak-to-peak stability. It is understood that the smaller the values of RMS stability and peak-to-peak stability, the better the power stability.
In some embodiments, the quality characteristic of the signal reflected by the laser beam may be obtained by reading data from the signal reflected by the laser beam, invoking an associated interface, or otherwise.
The first sampling rate is the frequency at which the results of the first process execution are obtained at each point in time. It will be appreciated that the first sampling rate is the number of times the first process is performed per point in time.
Specifically, the first sampling rate model maps the quality features of the signal reflected by the laser beam to a probability or numerical value, and then outputs a corresponding first sampling rate based on it. It can be understood that the worse the quality of the reflected signal, the greater the first sampling rate: the first process is performed more times and yields more results, compensating for the information loss caused by poor-quality target feature data.
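The monotonic relation described here (poorer signal quality, higher sampling rate) can be sketched with a simple linear interpolation between assumed minimum and maximum rates; the actual model is trained, and the [0, 1] score range and the rate bounds below are hypothetical.

```python
def first_sampling_rate(quality_score: float, min_rate: int = 1, max_rate: int = 10) -> int:
    """Map a quality score in [0, 1] (1 = best) to a per-time-point sampling rate.

    Lower quality -> more executions of the first process per time point.
    """
    quality_score = max(0.0, min(1.0, quality_score))
    return round(min_rate + (1.0 - quality_score) * (max_rate - min_rate))

rate_good = first_sampling_rate(0.9)   # high quality -> few repetitions
rate_poor = first_sampling_rate(0.2)   # poor quality -> many repetitions
```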
In some embodiments, the first sampling rate model may be a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model, or a Long Short-Term Memory (LSTM) model.
As shown in fig. 3, taking the ith time point of the N time points as an example, the first sampling rate model obtains the first sampling rate of the ith time point as m based on the quality characteristic of the signal reflected by the laser beam at the ith time point.
In some embodiments, the first sample rate model may be trained based on a number of training samples with identifications. Specifically, training samples with identifications are input into a first sampling rate model, and parameters of the first sampling rate model are updated through training.
In some embodiments, the training samples may be quality features of the signal reflected by the laser beam; see the related description of these quality features. In some embodiments, the identification may be the first sampling rate at which an accurate first person feature can be obtained from the reflected signal. In some embodiments, the identification may be obtained by taking, as the label, the first sampling rate that yielded a satisfactory first person feature in actual operation.
In some embodiments, training may be performed by a commonly used method based on the training samples.
The input to the second sample rate model is a quality characteristic of the video data and the output is the second sample rate.
The quality characteristics of video data are indicators for evaluating the quality of the video data. In some embodiments, the quality characteristics of the video data include at least a combination of one or more of resolution, color depth, and image distortion. Resolution is an indicator of the ability to evaluate the detail presented by video data, and is typically characterized by a unit area of pixels. It will be appreciated that the greater the pixel density, the higher the resolution, and the better the quality of the video data. Color depth is an indicator for evaluating the maximum number of colors of a color image or the maximum gray level of a gray image, and is generally characterized by the number of code bits used to store information for each pixel. It is understood that the larger the number of code bits, the larger the color depth, and the better the quality of the video data. The image distortion is an index for evaluating the simulation degree of the video image and the target object, and comprises geometric distortion, signal-to-noise ratio, dynamic range, color restoration and the like. Where the signal-to-noise ratio is the ratio of the power spectrum of the signal to the noise. It can be understood that the larger the signal-to-noise ratio, the smaller the degree of image distortion, and the better the quality of the video data. In some embodiments, the quality characteristics of the video data may be obtained by reading relevant parameters of the video data, invoking a relevant interface, or otherwise.
Similarly to the first sampling rate, the second sampling rate is a frequency at which the result of the execution of the second process is acquired at each time point. It will be appreciated that the second sampling rate is the number of times the second process is performed per point in time.
Specifically, the second sampling rate model maps the quality characteristics of the video data to a probability or a numerical value, and outputs a corresponding second sampling rate based on that probability or value. It can be understood that the worse the quality characteristics of the video data, the larger the second sampling rate, the more times the second process is performed, and the more execution results are obtained, which compensates for the information loss caused by poor-quality video data.
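For illustration only, the monotone relation described above (worse quality, higher rate) can be sketched as a hand-written stand-in for the trained model; the normalized quality score and the `min_rate`/`max_rate` bounds are our assumptions, not part of the patent:

```python
def second_sampling_rate(quality: float, min_rate: int = 1, max_rate: int = 10) -> int:
    """Map a normalized quality score in [0, 1] to a sampling rate.

    Lower quality yields a higher rate, so more results of the second
    process are collected to offset the information loss.
    """
    q = min(max(quality, 0.0), 1.0)  # clamp the score to [0, 1]
    return round(min_rate + (1.0 - q) * (max_rate - min_rate))

print(second_sampling_rate(0.9))  # good quality -> low rate
print(second_sampling_rate(0.1))  # poor quality -> high rate
```

In the patent this mapping is learned by a neural network model rather than written by hand; the sketch only preserves the stated monotonicity.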
In some embodiments, the second sampling rate model may be a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model, or a Long Short-Term Memory (LSTM) network model.
As shown in fig. 3, continuing to take the ith time point of the N time points as an example, the second sampling rate model obtains the second sampling rate n at the ith time point based on the quality characteristics of the video data at the ith time point. In some embodiments, the second sampling rate model may be trained based on a number of labeled training samples. Specifically, the labeled training samples are input into the second sampling rate model, and the parameters of the second sampling rate model are updated through training.
In some embodiments, the training samples may be quality characteristics of the video data; for details, refer to the related description of the quality characteristics of the video data. In some embodiments, the label may be a second sampling rate that enables an accurate second people number feature to be obtained. In some embodiments, the label may be obtained as follows: the second sampling rate corresponding to a second people number feature that met the requirements in actual operation is taken as the label.
In some embodiments, training may be performed on the training samples using a commonly used training method.
At step 320, the first process is performed a plurality of times based on the first sampling rate, and the second process is performed a plurality of times based on the second sampling rate.
As described above, the first sampling rate is the number of times the first process is performed per time point, that is, the number of results of performing the first process obtained per time point. As shown in fig. 3, if the first sampling rate at the ith time point is m, m results are obtained: ith time point result 1, ..., ith time point result m. For example, if the first sampling rate m = 3 at the ith time point, the first process is performed 3 times at the ith time point (e.g., within 1 second), and 3 results of performing the first process are obtained: 100, 120, and 110. For a description of performing the first process, refer to fig. 2, which is not repeated here.
Similar to the first sampling rate, the second sampling rate is the number of times the second process is performed per time point, that is, the number of results of performing the second process obtained per time point. As shown in fig. 3, if the second sampling rate at the ith time point is n, n results are obtained: ith time point result 1, ..., ith time point result n. For example, if the second sampling rate n = 5 at the ith time point, the second process is performed 5 times at the ith time point (e.g., within 1 second), and 5 results of performing the second process are obtained: 80, 100, 90, 110, and 120. For a description of performing the second process, refer to fig. 2, which is not repeated here.
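A minimal sketch of performing a process at a given per-time-point sampling rate and collecting every execution result; the helper name `run_process` and the stub process are illustrative names of ours, not from the patent:

```python
from typing import Callable, List

def run_process(process: Callable[[], int], sampling_rate: int) -> List[int]:
    """Execute `process` `sampling_rate` times at one time point and
    collect the result of every execution."""
    return [process() for _ in range(sampling_rate)]

# Stub first process replaying the example counts at the ith time point (m = 3).
counts = iter([100, 120, 110])
results = run_process(lambda: next(counts), sampling_rate=3)
print(results)  # [100, 120, 110]
```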
At step 330, a first people number feature is determined based on the plurality of results of performing the first process, and a second people number feature is determined based on the plurality of results of performing the second process.
In some embodiments, the results of the plurality of first process executions and the results of the plurality of second process executions are processed separately to obtain the first people number feature and the second people number feature. The processing method may be a common method such as averaging, variance calculation, summation, or weighted summation.
Continuing with the above example, the results of the 3 first process executions at the ith time point, 100, 120, and 110, are averaged to obtain a first people number feature of 110 at the ith time point; the results of the 5 second process executions at the ith time point, 80, 100, 90, 110, and 120, are averaged to obtain a second people number feature of 100 at the ith time point.
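The aggregation in this example can be sketched as follows; `people_count_feature` is an illustrative name of ours, and averaging is only one of the common methods listed above:

```python
from statistics import mean

def people_count_feature(results):
    """Aggregate the per-time-point execution results into a single
    people-count feature, here by averaging."""
    return mean(results)

first_feature = people_count_feature([100, 120, 110])           # first process, m = 3
second_feature = people_count_feature([80, 100, 90, 110, 120])  # second process, n = 5
print(first_feature, second_feature)
```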
On the one hand, the sampling rates for obtaining the people number features are determined according to the quality of the feature data of the target objects and the quality of the video data: when the quality is poor and more data information is lost, the sampling rate can be increased so that accurate people number features can still be obtained; when the quality is good and the data information is complete, the sampling rate can be reduced to improve the efficiency of obtaining the people number features. On the other hand, because the first sampling rate model and the second sampling rate model are neural network models trained on practical data, the accuracy and adaptability of the sampling rates can be improved.
The embodiments of the present application also provide a computer-readable storage medium. The storage medium stores computer instructions, and after a computer reads the computer instructions in the storage medium, the computer implements the method for controlling bandwidth described above.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered merely illustrative and not restrictive of the broad application. Various modifications, improvements and adaptations to the present application may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present application and thus fall within the spirit and scope of the exemplary embodiments of the present application.
Also, the present application uses specific words to describe the embodiments of the application. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the present application is included in at least one embodiment of the present application. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this application are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufactures, or materials, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block", "module", "engine", "unit", "component", or "system". Furthermore, aspects of the present application may be represented as a computer product, including computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, or VB.NET, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP, a dynamic programming language such as Python, Ruby, or Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or processing device. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing processing device or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to require more features than are expressly recited in the claims. Indeed, the claimed subject matter may lie in less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, documents, and the like, are hereby incorporated by reference into this application, except for any prosecution history inconsistent with or in conflict with the content of this application, and except for any document that would limit the broadest scope of the claims now or later associated with this application. It is noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the material accompanying this application and those set forth herein, the descriptions, definitions, and/or use of terms in this application shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the present application. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the present application can be viewed as being consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to only those embodiments explicitly described and depicted herein.

Claims (10)

1. A method of controlling bandwidth, comprising:
acquiring people number features and network features of N time points in a scene where a bandwidth to be controlled is applied, wherein the people number features comprise a first people number feature and a second people number feature, and N is an integer greater than 0;
inputting the people number features and the network features of the N time points into a trained bandwidth control model, predicting a target bandwidth, and adjusting the bandwidth to be controlled to the target bandwidth; wherein:
the bandwidth control model comprises a laser prediction layer, a video prediction layer, and a fusion layer;
the laser prediction layer acquires laser features based on the first people number features of the N time points;
the video prediction layer acquires video features based on the second people number features of the N time points;
the fusion layer outputs the target bandwidth based on the laser features, the video features, and the network features;
the first people number features are obtained by performing one or more first processes, the first process comprising:
a laser radar transmitting laser beams, and obtaining feature data of a plurality of target objects based on signals reflected by the laser beams;
a trained first people number recognition model determining the first people number features based on the feature data of the plurality of target objects;
the second people number features are obtained by performing one or more second processes, the second process comprising:
extracting frame data based on video data;
a trained second people number recognition model extracting at least one target object in the frame data, and determining the second people number features based on the at least one target object.
2. The method of claim 1, wherein performing the first process a plurality of times and performing the second process a plurality of times comprises:
for each of the N time points, a first sampling rate model determining a first sampling rate based on quality characteristics of the signals reflected by the laser beams, and a second sampling rate model determining a second sampling rate based on quality characteristics of the video data;
performing the plurality of first processes based on the first sampling rate, and performing the plurality of second processes based on the second sampling rate;
determining the first people number feature based on results of the plurality of first process executions, and determining the second people number feature based on results of the plurality of second process executions.
3. The method of claim 1, wherein the laser prediction layer, the video prediction layer, and the fusion layer are each a deep neural network.
4. The method of claim 1, wherein the network features comprise network condition features and network demand features, the network demand features being obtained based on event categories.
5. A system for controlling bandwidth, comprising:
the system comprises a characteristic acquisition module, a characteristic acquisition module and a control module, wherein the characteristic acquisition module is used for acquiring the number characteristics and network characteristics of N time points in a scene where a bandwidth to be controlled is applied, the number characteristics comprise a first number characteristic and a second number characteristic, and N is an integer greater than 0;
the target bandwidth prediction module is used for inputting the personnel number characteristics and the network characteristics of the N time points into a trained bandwidth control model, predicting a target bandwidth and adjusting the bandwidth to be controlled into the target bandwidth; wherein the content of the first and second substances,
the bandwidth control model comprises a laser prediction layer, a video prediction layer and a fusion layer;
the laser prediction layer acquires laser features based on the first human features of the N time points;
the video prediction layer obtains video features based on the second people number features of the N time points;
the fusion layer outputs a target bandwidth based on the laser characteristics, the video characteristics, and the network characteristics;
the first personal features are obtained by performing one or more first processes, the first processes including:
the laser radar transmits laser beams, and obtains characteristic data of a plurality of target objects based on signals reflected by the laser beams;
the trained first-person recognition model determines the first-person features based on the feature data of the multiple target objects;
the second population characteristic is obtained by performing one or more second processes, the second processes comprising:
extracting frame data based on the video data;
the trained second people number recognition model extracts at least one target object in the frame data, and determines the second people number characteristics based on the at least one target object.
6. The system of claim 5, wherein the target bandwidth prediction module is further configured to:
for each of the N time points, determine, by a first sampling rate model, a first sampling rate based on quality characteristics of the signals reflected by the laser beams, and determine, by a second sampling rate model, a second sampling rate based on quality characteristics of the video data;
perform the plurality of first processes based on the first sampling rate, and perform the plurality of second processes based on the second sampling rate;
determine the first people number feature based on results of the plurality of first process executions, and determine the second people number feature based on results of the plurality of second process executions.
7. The system of claim 5, wherein the laser prediction layer, the video prediction layer, and the fusion layer are each a deep neural network.
8. The system of claim 5, wherein the network features comprise network condition features and network demand features, the network demand features being obtained based on event categories.
9. An apparatus for controlling bandwidth, comprising at least one storage medium and at least one processor, the at least one storage medium for storing computer instructions; the at least one processor is configured to execute the computer instructions to implement the method of any of claims 1-4.
10. A computer-readable storage medium, wherein the storage medium stores computer instructions, which when executed by a processor, implement the method of any one of claims 1 to 4.
CN202010906834.4A 2020-09-02 2020-09-02 Method and system for controlling bandwidth Active CN111817902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010906834.4A CN111817902B (en) 2020-09-02 2020-09-02 Method and system for controlling bandwidth


Publications (2)

Publication Number Publication Date
CN111817902A CN111817902A (en) 2020-10-23
CN111817902B true CN111817902B (en) 2021-01-01

Family

ID=72860100


Country Status (1)

Country Link
CN (1) CN111817902B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1218592A (en) * 1996-03-18 1999-06-02 通用仪器公司 Dynamic bandwidth allocation for communication network
CN103312740A (en) * 2012-03-09 2013-09-18 腾讯科技(深圳)有限公司 Generation method and device of P2P network strategy
CN109257760A (en) * 2018-09-28 2019-01-22 西安交通大学深圳研究院 Customer flow forecasting system in wireless network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738492B2 (en) * 2007-11-19 2010-06-15 Avistar Communications Corporation Network communication bandwidth management




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 200050 e03-j40, 10th floor, 2299 Yan'an west road, Changning District, Shanghai

Patentee after: Xingrong (Shanghai) Information Technology Co.,Ltd.

Address before: 200050 e03-j40, 10th floor, 2299 Yan'an west road, Changning District, Shanghai

Patentee before: SHANGHAI XINGRONG INFORMATION TECHNOLOGY Co.,Ltd.
