CN116170660A - Algorithm scheduling method and device for camera, computer equipment and medium


Info

Publication number
CN116170660A
Authority
CN
China
Prior art keywords
algorithm
scheduling
camera
information
target
Prior art date
Legal status
Pending
Application number
CN202211710090.4A
Other languages
Chinese (zh)
Inventor
曾卫东
程冰
Current Assignee
Hangzhou Lifei Software Technology Co ltd
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Hangzhou Lifei Software Technology Co ltd
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Lifei Software Technology Co ltd and Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202211710090.4A
Publication of CN116170660A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/14: Network analysis or design
    • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention relates to the field of artificial intelligence technologies, and in particular, to an algorithm scheduling method and apparatus for a camera, a computer device, and a medium. In the method, basic information and scheduling schemes of cameras are acquired, association relations between the basic information and the scheduling schemes are established, basic information and scheduling schemes that have an association relation form training samples, and all training samples are used to train an algorithm scheduling model. Target basic information of a target camera is then input into the trained algorithm scheduling model to obtain a target scheduling scheme, and the algorithm, working time and lens configuration of the target camera are updated according to the target scheduling scheme. By associating the deployment information and scene information of the cameras with their scheduling schemes and using these associations to determine the training samples for the algorithm scheduling model, the robustness of the algorithm scheduling process is improved, the camera algorithm scheduling process is fully automated, and the efficiency and accuracy of camera algorithm scheduling are further improved.

Description

Algorithm scheduling method and device for camera, computer equipment and medium
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to an algorithm scheduling method and apparatus for a camera, a computer device, and a medium.
Background
With the development of artificial intelligence technology, intelligent cameras have become increasingly widespread. Intelligent cameras with different algorithm capabilities cover different application scenarios, such as public transportation, commercial shopping and medical scenarios. The continuous growth in the number of intelligent cameras also makes algorithm scheduling for cameras time-consuming and labor-intensive. Existing algorithm scheduling mainly relies on manual scheduling, in which different algorithms are deployed to different cameras to suit their application scenarios.
However, for a multifunctional intelligent camera, the algorithm it runs needs to be adjusted according to real-time scene information to realize different functions. With manual scheduling, the operation flow is cumbersome, and when a new application scenario appears, manual scheduling easily leads to configuration errors, repeated debugging, duplicated work and similar problems. How to effectively improve the efficiency and accuracy of camera algorithm scheduling has therefore become an urgent problem to be solved.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide an algorithm scheduling method, apparatus, computer device and medium for a camera, so as to solve the problem that the efficiency and accuracy of camera algorithm scheduling are low.
In a first aspect, an embodiment of the present invention provides an algorithm scheduling method for a camera, where the algorithm scheduling method includes:
acquiring basic information of N cameras and the scheduling schemes they use, wherein each scheduling scheme comprises a running time, a focal length and a scheduling algorithm, and N is an integer greater than zero;
traversing the N cameras, and respectively establishing association relations between basic information of the N cameras and a scheduling scheme to obtain at least one association relation;
forming training samples by basic information with association relation and a scheduling scheme to obtain at least one training sample, and training a preset algorithm scheduling model by using all the training samples to obtain a trained algorithm scheduling model;
inputting the acquired target basic information of the target camera into the trained algorithm scheduling model to predict a scheduling algorithm, so as to obtain a target scheduling scheme corresponding to the target camera;
and configuring an algorithm in the target camera as a scheduling algorithm in the target scheduling scheme, and updating the working time and the lens configuration of the target camera according to the running time and the focal length in the target scheduling scheme.
In a second aspect, an embodiment of the present invention provides an algorithm scheduling apparatus for a camera, where the algorithm scheduling apparatus includes:
the information acquisition module is used for acquiring basic information of N cameras and the scheduling schemes they use, wherein each scheduling scheme comprises a running time, a focal length and a scheduling algorithm, and N is an integer greater than zero;
the relation establishing module is used for traversing the N cameras, and respectively establishing association relations between the basic information of the N cameras and the scheduling scheme to obtain at least one association relation;
the model training module is used for forming training samples by the basic information with the association relation and the scheduling scheme to obtain at least one training sample, and training a preset algorithm scheduling model by using all the training samples to obtain a trained algorithm scheduling model;
the scheme determining module is used for inputting the acquired target basic information of the target camera into the trained algorithm scheduling model to predict a scheduling algorithm, so as to obtain a target scheduling scheme corresponding to the target camera;
and the algorithm scheduling module is used for configuring an algorithm in the target camera as a scheduling algorithm in the target scheduling scheme, and updating the working time and the lens configuration of the target camera according to the running time and the focal length in the target scheduling scheme.
In a third aspect, an embodiment of the present invention provides a computer device, the computer device including a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the algorithm scheduling method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium storing a computer program, which when executed by a processor implements the algorithm scheduling method according to the first aspect.
Compared with the prior art, the embodiment of the invention has the beneficial effects that:
Basic information of N cameras and the scheduling schemes they use are acquired, wherein each scheduling scheme comprises a running time, a focal length and a scheduling algorithm. The N cameras are traversed, and association relations between their basic information and their scheduling schemes are respectively established to obtain at least one association relation. Basic information and a scheduling scheme that have an association relation form a training sample, giving at least one training sample, and all training samples are used to train a preset algorithm scheduling model to obtain a trained algorithm scheduling model. The acquired target basic information of a target camera is input into the trained algorithm scheduling model for scheduling algorithm prediction to obtain a target scheduling scheme corresponding to the target camera; the algorithm in the target camera is configured as the scheduling algorithm in the target scheduling scheme, and the working time and lens configuration of the target camera are updated according to the running time and focal length in the target scheduling scheme. By associating the deployment information and scene information of the cameras with their scheduling schemes and determining training samples from these associations to train the algorithm scheduling model, the robustness of the algorithm scheduling process is improved, the camera algorithm scheduling process is fully automated, and the efficiency and accuracy of camera algorithm scheduling are further improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application environment of an algorithm scheduling method for a camera according to a first embodiment of the present invention;
fig. 2 is a flowchart of an algorithm scheduling method for a camera according to a first embodiment of the present invention;
fig. 3 is a schematic structural diagram of an algorithm scheduling device for a camera according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to a third embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "once", "in response to a determination" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The embodiment of the invention can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
It should be understood that the sequence numbers of the steps in the following embodiments do not mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
The algorithm scheduling method for the camera provided by the embodiment of the invention can be applied to an application environment as shown in fig. 1, in which a client communicates with a server. The client includes, but is not limited to, a palmtop computer, a desktop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cloud terminal device, a personal digital assistant (PDA), and other computer devices. The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
The client and the server can be deployed in a cloud environment so that computer equipment and servers with high computing power and large storage space can be used. The server communicates with at least one camera, where the camera may also be an image acquisition device such as a video camera, a video recorder or handheld photographic equipment. The cameras are deployed in specific application scenarios, which may include, but are not limited to, medical, public transportation, commercial shopping, garbage collection, party and government administration, and water service scenarios, and the intelligent algorithms run by a camera differ between application scenarios. When a new camera is deployed in an application scenario, or an existing camera is moved to a new position, the intelligent algorithm corresponding to the application scenario needs to be determined for the camera and deployed in it, so that the intelligent functions corresponding to that application scenario are executed. In addition, the same camera may be involved in several application scenarios; for example, a camera deployed at a hospital gate can capture both medical scenes and public transportation scenes, so the intelligent algorithms it runs involve algorithm switching and the like.
Referring to fig. 2, a flowchart of an algorithm scheduling method for a camera according to an embodiment of the present invention is provided. The algorithm scheduling method for a camera may be applied to the client in fig. 1; the computer device corresponding to the client is connected to a server, and the server communicates with at least one camera. The computer device corresponding to the client obtains the basic information and scheduling schemes of the cameras from the server, as well as target basic information of a target camera, where the target basic information of the target camera may be used to determine a target scheduling scheme for the target camera. As shown in fig. 2, the algorithm scheduling method for the camera may include the following steps:
step S201, basic information of N cameras and a used scheduling scheme are obtained.
The basic information may refer to installation information, scene information, configuration information and the like of the camera. The installation information may include information such as the height and pose of the camera when it is installed; the scene information may include the scene type, shooting targets, identification area and geographical position of the camera; and the configuration information may include the lens parameters and configured algorithm information of the camera. The scheduling scheme may include a running time, a focal length and a scheduling algorithm, where the running time may refer to the time period during which the camera acquires images and processes them with the configured algorithm, the focal length may refer to the target focal length to which the camera's current focal length is to be adjusted, and the scheduling algorithm may refer to the target algorithm to which the camera is to be updated.
Specifically, the N cameras may be deployed in a plurality of different application scenarios. The acquired basic information and scheduling scheme may be the basic information and the scheduling scheme used by a camera at the current time, or the basic information and the scheduling scheme used by a camera at a historical time, so as to provide sufficient information for establishing the subsequent association relations.
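For illustration only, the basic information and the scheduling scheme described above can be pictured as two small records; the following is a minimal sketch, and the field names are assumptions chosen for readability rather than terms defined by this disclosure.

from dataclasses import dataclass, field

@dataclass
class BasicInformation:
    installation_height_m: float        # installation information (height is used in this embodiment)
    sharpness: float                    # lens information (definition/sharpness)
    identification_area: str            # area covered by the camera's view angle
    configured_algorithms: list = field(default_factory=list)  # algorithm information already configured
    latitude: float = 0.0               # geographic location information (longitude and latitude)
    longitude: float = 0.0

@dataclass
class SchedulingScheme:
    running_time: str                   # e.g. "08:00-20:00"
    focal_length_mm: float              # target focal length to adjust to
    scheduling_algorithm: str           # target algorithm to deploy
    algorithm_function: str = ""        # intelligent processing task the algorithm serves
    algorithm_type: str = ""            # basic task type, e.g. classification or prediction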
Optionally, the basic information includes installation information, lens information, identification area information, configured algorithm information and geographic location information;
the obtaining basic information of the N cameras comprises the following steps:
for any camera, splicing the installation information, the lens information, the configured algorithm information, the identification area information and the geographic position information of the camera, which are acquired in at least one preset time period, to obtain at least one first splicing result corresponding to the preset time period;
and determining all the first splicing results as basic information of the cameras, traversing N cameras, and obtaining the basic information of the N cameras.
The lens information may refer to the lens parameters of the camera; in this embodiment, height information is used as the installation information and definition (sharpness) information is used as the lens information. The identification area information may refer to the area covered by the camera's view angle, the configured algorithm information may include at least one configured algorithm, and the geographic location information may refer to the geographic coordinates where the camera is deployed; in this embodiment, longitude and latitude are used as the geographic location information. The first splicing result may be a representation of the camera's basic information. The preset time periods may refer to several historical time periods and the current time period.
Specifically, if there are M preset time periods, then for a single camera there are M first splicing results, and these M first splicing results are used as the basic information of that camera. With N cameras, there are N*M first splicing results in total serving as basic information. It should be noted that, in this embodiment, the basic information of the same camera is assumed by default to differ between different preset time periods.
The first splicing result may be represented as a vector of size 1*K, that is, one row with K columns, where each column corresponds to one item of basic information. Since this embodiment splices the installation information, lens information, configured algorithm information, identification area information and geographic location information of the camera, the value of K is 5.
By default the camera is a camera with intelligent processing capability, and a single camera can store a plurality of configured algorithms at the same time; for a camera with lower computing power, a single camera may store only one configured algorithm. A configured algorithm is one that the camera can deploy directly inside the camera in order to perform intelligent task processing.
In the embodiment, the basic information of the camera is expressed in a vector form, so that the basic information can be expressed in a unified form, the subsequent training of an algorithm scheduling model as a part of training samples is facilitated, and the efficiency of the algorithm scheduling process is improved.
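A minimal sketch of building the first splicing result as a 1*K row vector (K = 5), assuming every item of basic information has already been mapped to a number; the encoding names (configured_algorithm_id, identification_area_id, location_id) are placeholders, not part of the disclosure. Stacking over the M preset time periods and N cameras yields the N*M basic-information vectors mentioned above.

import numpy as np

def first_splicing_result(info: dict) -> np.ndarray:
    # one row vector per camera per preset time period; each column is one item of basic information
    return np.array([[info["installation_height_m"],
                      info["sharpness"],
                      info["configured_algorithm_id"],   # assumed numeric id of the configured algorithm
                      info["identification_area_id"],    # assumed numeric id of the identification area
                      info["location_id"]]])             # assumed numeric id of the geographic position

# stacking N cameras over M preset time periods gives an (N*M, 5) matrix of basic information
records = [{"installation_height_m": 3.2, "sharpness": 0.8, "configured_algorithm_id": 4,
            "identification_area_id": 2, "location_id": 7}]
X = np.vstack([first_splicing_result(r) for r in records])
print(X.shape)   # (1, 5) here; (N*M, 5) in general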
Optionally, after obtaining at least one splicing result corresponding to the preset time period, the method further includes:
for any preset time period, acquiring a scene image acquired by a camera in the preset time period, inputting the scene image into a trained scene analysis model for camera information prediction, and obtaining predicted lens information, predicted installation information, predicted identification area information and predicted geographic position information;
comparing the predicted lens information, the predicted installation information, the predicted identification area information and the predicted geographic position information with the lens information, the installation information, the identification area information and the geographic position information of the camera acquired in a preset time period respectively to obtain a comparison result;
if the comparison result meets the preset condition, the first splicing result of the preset time period is used as the basic information of the camera, and all the preset time periods are traversed to obtain the basic information of the camera.
The scene image may be an actual image acquired by the camera in a preset time period. The trained scene analysis model may be used to predict the basic information of the camera and may adopt a trained prediction model, which may include a trained encoder and a plurality of trained full-connection layers. The trained encoder may be used to extract image features of the scene image, and the trained full-connection layers may be used to map the image features to the feature spaces of the basic information, with different trained full-connection layers corresponding to the feature spaces of different items of basic information.
The predicted shot information, the predicted installation information, the predicted identification area information, and the predicted geographic position information may refer to a result of predicting the basic information of the camera from the trained scene analysis model according to the image features of the scene image.
The comparison result may include a plurality of comparison sub-results, each corresponding to one item of basic information. The preset condition may be a count condition on the consistent comparison sub-results: the number of comparison sub-results judged consistent is compared with a preset number threshold, and if that number is greater than or equal to the preset number threshold, the comparison result is determined to meet the preset condition.
Specifically, the input of the trained scene analysis model may be a fixed number of scene images, the fixed number of scene images may be set to 3, and the size normalization processing is performed on all the scene images to ensure that the input of the trained scene analysis model is the same size.
In this embodiment, the configured algorithm in the basic information cannot predict according to the scene image, so only the lens information, the installation information, the identification area information and the geographical position information are predicted, the lens information may be definition information, and the installation information may be altitude information.
The trained full-connection layer corresponding to the lens information, the trained full-connection layer corresponding to the installation information and the trained full-connection layer corresponding to the identification area information are all full-connection layers for prediction (regression) tasks. Because the geographic position information is difficult to predict directly, in this embodiment the trained full-connection layer corresponding to the geographic position information is a full-connection layer for a classification task, which during training uses a fixed number of preset geographic position labels.
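One hedged way to realize such a scene analysis model is a shared encoder with regression heads for the lens, installation and identification-area information and a classification head over a fixed set of preset geographic-position labels. The layer sizes, head dimensions and label count below are assumptions for a sketch, not the architecture prescribed by this disclosure.

import torch
import torch.nn as nn

class SceneAnalysisModel(nn.Module):
    def __init__(self, num_location_labels: int = 16):
        super().__init__()
        # shared encoder extracting image features from size-normalized scene images
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # full-connection layers for prediction (regression) tasks
        self.lens_head = nn.Linear(32, 1)       # predicted lens (definition) information
        self.install_head = nn.Linear(32, 1)    # predicted installation (height) information
        self.area_head = nn.Linear(32, 4)       # predicted identification area, e.g. a bounding box
        # full-connection layer for the classification task over preset geographic-position labels
        self.location_head = nn.Linear(32, num_location_labels)

    def forward(self, images):                  # images: (B, 3, H, W)
        feats = self.encoder(images)
        return (self.lens_head(feats), self.install_head(feats),
                self.area_head(feats), self.location_head(feats))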
Comparing the predicted lens information, predicted installation information, predicted identification area information and predicted geographic position information with the lens information, installation information, identification area information and geographic position information of the camera acquired in the preset time period means comparing the predicted lens information with the camera's lens information, the predicted installation information with the camera's installation information, the predicted identification area information with the camera's identification area information, and the predicted geographic position information with the camera's geographic position information, yielding four comparison sub-results. In this embodiment the number threshold is set to 2; if more than 2 of the four comparison sub-results are consistent, the comparison result meets the preset condition.
Comparing the predicted basic information with the acquired basic information of the camera avoids situations in which the acquired basic information contains errors, and ensures that the acquired basic information is sufficiently reliable.
In an embodiment, since requiring the comparison sub-results to be exactly consistent is a strict condition, the comparison may instead be performed with a distance measurement. For example, for the lens information, the absolute value of the difference between the predicted definition information and the acquired definition information is used as the comparison sub-result. It should be noted that each distance measurement result needs to be normalized; in this embodiment, a preset normalization function is used.
The preset normalization function can be expressed as:
[preset normalization function f(x): given as an equation image in the original document]
Here x may represent a distance measurement result and f(x) its normalization result. Since the distance measurement results are all values greater than zero, the closer a distance measurement result is to 0, the closer its normalization result is to 1 and the more similar the predicted and acquired basic information are; the closer the distance measurement result is to 1, the closer the normalization result is to 0 and the more dissimilar the predicted and acquired basic information are.
After normalization, the comparison sub-results can simply be added together and the sum compared with a preset value; when the sum is greater than or equal to the preset value, the comparison result meets the preset condition. In this case the preset value can be the same as the preset number threshold.
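A sketch of the distance-based comparison just described. The exact normalization function appears only as an equation image in the original document, so f(x) = 1 - min(x, 1) is used here as one assumption consistent with the stated behaviour (distances near 0 map to results near 1, and vice versa); all items, including the geographic position, are treated as numbers on a comparable scale purely for brevity.

def normalize(x: float) -> float:
    # assumed normalization: a distance near 0 yields a result near 1, a distance near 1 yields near 0
    return 1.0 - min(x, 1.0)

def comparison_passes(predicted: dict, acquired: dict, preset_value: float = 2.0) -> bool:
    # one normalized comparison sub-result per item of basic information
    sub_results = [normalize(abs(predicted[k] - acquired[k])) for k in acquired]
    # the sum of normalized sub-results is compared against the preset value (here equal to the number threshold)
    return sum(sub_results) >= preset_value

print(comparison_passes({"sharpness": 0.82, "height": 3.0, "area": 0.5, "location": 0.1},
                        {"sharpness": 0.80, "height": 3.1, "area": 0.9, "location": 0.1}))  # True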
In this embodiment, the scene analysis model predicts the basic information, and compares the basic information with the obtained basic information to ensure the reliability of the obtained basic information, and screens out the incorrect or unsuitable basic information, thereby ensuring the accuracy of the subsequent algorithm scheduling model after training.
Optionally, the scheduling scheme further includes an algorithm function and an algorithm type;
the scheduling scheme used for acquiring the N cameras comprises the following steps:
for any camera, splicing the historical running time, the historical focal length, the historical scheduling algorithm, the historical algorithm function and the historical algorithm type of the camera, which are acquired in at least one preset time period, to obtain at least one second splicing result corresponding to the preset time period;
and determining all second splicing results as scheduling schemes of the cameras, and traversing N cameras to obtain at least one scheduling scheme.
The algorithm function may refer to an intelligent processing task to which the algorithm is applied, the algorithm type may refer to a basic task of the algorithm, and the basic task may include a classification task, a prediction task, and the like.
The historical runtime, the historical focus, the historical scheduling algorithm, the historical algorithm function, and the historical algorithm type may refer to a scheduling scheme used by the camera during a historical preset time period, and the second splice result may refer to a representation of the scheduling scheme.
Specifically, the intelligent processing task may be a mask detection task, a crowd gathering task, etc. in a medical scene, a red light running detection task, an overspeed identification task, a traffic violation parking task, etc. in a public transportation scene, a crowd gathering task, an unattended operation task, etc. in a commercial shopping scene, a garbage overflow detection task, a garbage exposure detection task, etc. in a garbage collection scene, a stranger identification task, etc. in a party and administration scene, a water level detection task, a drift detection task, etc. in a water service scene.
The second splicing result may be represented as a vector of size 1*L, that is, one row with L columns, where each column corresponds to one item of the scheduling scheme. Since this embodiment splices the running time, focal length, scheduling algorithm, algorithm function and algorithm type, the value of L is also 5.
If there are M preset time periods, then for a single camera there are M second splicing results, and these M second splicing results are used as the scheduling schemes of that camera. With N cameras, there are N*M second splicing results in total serving as scheduling schemes. It should be noted that, in this embodiment, the same camera is assumed by default to have different scheduling schemes in different preset time periods.
The implementer can set a specific scheduling scheme for a camera according to the actual situation, and a manually set scheduling scheme need not be the scheduling scheme actually adopted by the camera, which improves the flexibility of determining subsequent scheduling schemes.
In the embodiment, the scheduling scheme of the camera is expressed in a vector form, so that the scheduling scheme can be expressed in a unified form, the subsequent training of an algorithm scheduling model as a part of training samples is facilitated, and the efficiency of the algorithm scheduling process is improved.
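In the same spirit as the earlier sketch, the second splicing result can be a 1*L row vector (L = 5), and pairing it with the corresponding basic-information vector gives one (input, target) training pair; the numeric encodings below are again assumptions.

import numpy as np

def second_splicing_result(scheme: dict) -> np.ndarray:
    # one row per camera per preset time period; columns: running time, focal length, algorithm, function, type
    return np.array([[scheme["running_time_code"],        # assumed numeric code for the running-time period
                      scheme["focal_length_mm"],
                      scheme["scheduling_algorithm_id"],  # assumed numeric algorithm id
                      scheme["algorithm_function_id"],
                      scheme["algorithm_type_id"]]])

# pairing the (N*M, 5) basic-information matrix X with the (N*M, 5) scheme matrix Y gives the training samples
schemes = [{"running_time_code": 1, "focal_length_mm": 75.0, "scheduling_algorithm_id": 3,
            "algorithm_function_id": 5, "algorithm_type_id": 0}]
Y = np.vstack([second_splicing_result(s) for s in schemes])
print(Y.shape)   # (1, 5) here; (N*M, 5) in general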
The step of acquiring the basic information of the N cameras and the scheduling schemes they use provides real data for subsequently associating basic information with scheduling schemes, so that the associated information generalizes well enough, which improves the accuracy of subsequent intelligent algorithm scheduling.
Step S202, traversing N cameras, and respectively establishing association relations between basic information of the N cameras and a scheduling scheme to obtain at least one association relation.
The association relationship may refer to a relationship between basic information and a scheduling scheme, and the association relationship may be used to determine the scheduling scheme conforming to the basic information of the camera.
Specifically, an association relationship may be established between a single item of the basic information and a single item of the scheduling scheme, for example between the height information and the scheduling algorithm. For the installation information of the camera: for a camera height of 2 to 3 meters, the associated scheduling algorithms are small-object recognition algorithms, such as a paper-dust recognition algorithm, a garbage recognition algorithm, a smoke recognition algorithm, a mask recognition algorithm or a helmet recognition algorithm; for a camera height of 3 to 5 meters, the associated scheduling algorithms may be medium-object and personnel-related recognition algorithms, such as a human body detection algorithm or an advertisement banner recognition algorithm; and for a camera height of more than 5 meters, the associated scheduling algorithms may be large-object recognition algorithms, such as a vehicle recognition algorithm, a construction recognition algorithm or a river bank detection algorithm.
Since an association relationship may link a single item of the basic information to a single item of the scheduling scheme, one item of basic information may correspond to a plurality of scheduling schemes, and likewise one scheduling scheme may correspond to a plurality of items of basic information.
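The height-to-algorithm correspondence from the example above can be written as a simple lookup; the function below is illustrative only, with algorithm names taken directly from that example.

def algorithms_for_height(height_m: float) -> list[str]:
    # association between installation height and candidate scheduling algorithms (from the example above)
    if 2.0 <= height_m < 3.0:
        return ["paper_dust_recognition", "garbage_recognition", "smoke_recognition",
                "mask_recognition", "helmet_recognition"]                       # small-object algorithms
    if 3.0 <= height_m < 5.0:
        return ["human_body_detection", "advertisement_banner_recognition"]     # medium objects / personnel
    if height_m >= 5.0:
        return ["vehicle_recognition", "construction_recognition", "river_bank_detection"]  # large objects
    return []

print(algorithms_for_height(3.2))   # ['human_body_detection', 'advertisement_banner_recognition']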
Optionally, traversing the N cameras, and respectively establishing association relations between basic information of the N cameras and the scheduling scheme to obtain at least one association relation, including:
for any one of the N cameras, associating basic information of the camera in any preset time period with a scheduling scheme to obtain an association relationship corresponding to the camera in the preset time period, traversing the M preset time periods, and establishing the association relationship of the camera in the corresponding M preset time periods;
selecting cameras from the N cameras, and continuously establishing association relations of the selected cameras in the corresponding M preset time periods until the N cameras are traversed, so as to respectively establish the association relations of the N cameras in the corresponding M preset time periods, and obtain at least one association relation.
The number of preset time periods is M, where M is an integer greater than zero; each camera has M association relations, so the N cameras have N*M association relations.
Specifically, these N*M association relations are only the association relations between the basic information of the cameras and the scheduling schemes actually used; since scheduling schemes can also be configured manually, the association relations are in practice sufficiently abundant.
In this embodiment, a sufficient number of association relations are established to ensure that enough training samples are provided for the training process in the following process, ensure the fitting effect of the algorithm scheduling model, and improve generalization and accuracy of the following algorithm scheduling model after training.
And traversing the N cameras, respectively establishing the association relation between the basic information of the N cameras and the scheduling scheme, and obtaining at least one association relation, so that the subsequent construction of training samples according to the association relation is facilitated, the algorithm scheduling model is trained, enough training samples are provided for the training process, and the accuracy of the trained algorithm scheduling model is ensured.
Step S203, a training sample is formed by the basic information with the association relation and the scheduling scheme, at least one training sample is obtained, and a preset algorithm scheduling model is trained by using all the training samples, so that a trained algorithm scheduling model is obtained.
The training samples may be the inputs used when training the algorithm scheduling model, and the trained algorithm scheduling model can predict a scheduling scheme for the acquired basic information of an unknown camera. A training sample is formed from basic information and a scheduling scheme that have an association relation.
Specifically, each time algorithm scheduling needs to be carried out for cameras, their basic information and scheduling schemes are collected again, so that each round of algorithm scheduling achieves higher accuracy, the generalization ability of the algorithm scheduling model keeps improving, and the intelligent algorithm scheduling becomes more accurate and closer to the usage scenario.
Optionally, training a preset algorithm scheduling model by using all training samples to obtain a trained algorithm scheduling model, including:
inputting basic information in any training sample into a preset algorithm scheduling model to obtain a predicted sample scheme;
according to the sample scheme, the scheduling scheme in the training sample and a preset prediction loss function, calculating to obtain the predictor loss of the corresponding training sample;
traversing all training samples to obtain the predicted sub-loss of the corresponding training samples, adding all the predicted sub-losses to obtain the predicted loss, training the algorithm scheduling model by adopting a gradient descent method based on the predicted loss until the predicted loss converges to meet the preset condition, and obtaining the trained algorithm scheduling model.
The sample scheme may refer to a result of predicting the training samples by a preset algorithm scheduling model, the prediction loss function may adopt a mean square error loss function, the predictor loss may refer to a loss of a corresponding training sample, the prediction loss may refer to a total loss of all training samples, and the gradient descent method may adopt a random gradient descent method, a batch gradient descent method, and the like.
In this embodiment, the preset condition may mean that the prediction loss does not decrease for a preset number of consecutive training batches, and the preset number of training batches may be set to 5. The prediction loss is calculated from the prediction results and the input training samples, and the algorithm scheduling model is trained with it so that the trained model learns the association relations between basic information and scheduling schemes, thereby providing a more accurate scheduling scheme for the algorithm scheduling task.
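A hedged sketch of the training procedure just described: a mean square error prediction loss summed over the training samples, stochastic gradient descent, and stopping once the prediction loss has not decreased for 5 consecutive passes over the data. The model architecture and hyper-parameters are assumptions.

import torch
import torch.nn as nn

def train_scheduler(X, Y, patience: int = 5, lr: float = 0.01, max_epochs: int = 500):
    # X: (N*M, 5) basic-information vectors, Y: (N*M, 5) scheduling-scheme vectors (float tensors)
    model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, Y.shape[1]))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss(reduction="sum")        # sum of predictor sub-losses gives the prediction loss
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X), Y)              # predicted sample schemes vs. scheduling schemes in the samples
        loss.backward()
        optimizer.step()
        if loss.item() < best:
            best, stale = loss.item(), 0
        else:
            stale += 1
            if stale >= patience:                # prediction loss has not decreased for `patience` passes
                break
    return model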
In the step of forming training samples from basic information and scheduling schemes that have an association relation, obtaining at least one training sample, and training the preset algorithm scheduling model with all training samples to obtain the trained algorithm scheduling model, enough training samples are constructed from enough association relations, so that the generalization and accuracy of the trained algorithm scheduling model can be effectively improved.
And step S204, inputting the acquired target basic information of the target camera into a trained algorithm scheduling model to predict a scheduling algorithm, and obtaining a target scheduling scheme corresponding to the target camera.
The target camera may refer to a camera that needs to perform algorithm scheduling, the target basic information may refer to basic information of the target camera, and the target scheduling scheme may refer to a scheduling scheme applied to the target camera and output by a trained algorithm scheduling model.
Optionally, after obtaining the target scheduling scheme of the corresponding target camera, the method further includes:
constructing a target sample by using the target basic information and the target scheduling scheme;
and training the algorithm scheduling model again by adopting all training samples and target samples to obtain an updated algorithm scheduling model, wherein the updated algorithm scheduling model is used for carrying out algorithm scheduling prediction according to the updated target basic information.
The target sample may be a training sample used for updating an algorithm scheduling model, the updated algorithm scheduling model is used for performing algorithm scheduling prediction according to updated target basic information, and the updated target basic information may be basic information of a new camera acquired later.
Specifically, the retraining of the algorithm scheduling model is required to be performed every time the algorithm scheduling is performed, so that in the actual use process, an implementer can perform batch algorithm scheduling processing after receiving target basic information of a preset number of target cameras.
And for the target basic information which is subjected to algorithm scheduling processing in the same batch, after the target basic information of one target camera is subjected to algorithm scheduling, a target sample can be formed by the target basic information and a corresponding target scheduling scheme, and the algorithm scheduling model is trained again to obtain an updated algorithm scheduling model so as to improve the accuracy of the algorithm scheduling process of other target basic information in the batch.
It should be noted that, when the target sample is obtained, the implementer may adjust the target sample according to the actual situation, so that the association relationship between the target basic information corresponding to the target sample and the target scheduling scheme better accords with the experience of the implementer.
In this embodiment, the target sample is formed by the target basic information and the target scheduling scheme to retrain the algorithm scheduling model, so that the generalization of the algorithm scheduling model is gradually increased along with the processing process during batch processing, and the accuracy of algorithm scheduling is improved.
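Under the same assumptions as the training sketch above, retraining with the new target samples can be a simple concatenation of the existing training set with the confirmed (target basic information, target scheduling scheme) pairs before running the same training routine again.

import torch

def update_training_set(X, Y, target_X, target_Y):
    # append the confirmed (target basic information, target scheduling scheme) pairs to the
    # existing training samples; the combined set is then used to retrain the algorithm
    # scheduling model, e.g. with the train_scheduler sketch shown earlier
    X_new = torch.cat([X, target_X], dim=0)
    Y_new = torch.cat([Y, target_Y], dim=0)
    return X_new, Y_new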
The step of inputting the obtained target basic information of the target camera into the trained algorithm scheduling model to predict the scheduling algorithm to obtain the target scheduling scheme corresponding to the target camera can train the algorithm scheduling model in real time, so that the trained algorithm scheduling model is more robust, and the scheduling method obtained by intelligent algorithm scheduling can be more accurate and close to a use scene.
Step S205, the algorithm in the target camera is configured as a scheduling algorithm in the target scheduling scheme, and the working time and the lens configuration of the target camera are updated according to the running time and the focal length in the target scheduling scheme.
The algorithm in the target camera may refer to an algorithm configured in the target camera at the current moment, and the working time and the lens configuration of the target camera may refer to a working time period and lens parameters of the algorithm configured in the target camera at the current moment.
Specifically, if the target camera already stores the scheduling algorithm in the target scheduling scheme, the scheduling algorithm is deployed in the target camera directly; if the target camera does not store the scheduling algorithm in the target scheduling scheme, the model parameters of the scheduling algorithm are sent to the target camera through the server, and after the target camera receives the model parameters, the scheduling algorithm is deployed in the target camera.
For example, the identification area in the target basic information of the target camera is a parking violation area, the stored intelligent algorithm comprises a motor vehicle parking violation identification algorithm, and if the target basic information is input into a trained algorithm scheduling model to predict the scheduling algorithm, the scheduling algorithm in the obtained target scheduling scheme is the motor vehicle parking violation identification algorithm, the motor vehicle parking violation identification algorithm is configured in the target camera.
The working time and lens configuration of the target camera are updated according to the running time and focal length in the target scheduling scheme. For example, if the running time in the target scheduling scheme is from eight o'clock in the morning to eight o'clock in the evening and the lens focal length in the target scheduling scheme is 75 mm, the working time during which the target camera runs the scheduled algorithm is set to that running time, namely from eight o'clock in the morning to eight o'clock in the evening, and the lens focal length used when the target camera runs the scheduled algorithm is adjusted to 75 mm.
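A sketch of applying a predicted target scheduling scheme to a camera, following the example above (working time 08:00 to 20:00, lens focal length 75 mm). The camera-control interface used here is hypothetical and stands in for whatever deployment channel the server and camera actually expose.

def apply_target_scheme(camera, scheme: dict):
    # camera: hypothetical control object; scheme: dict with the fields used in the example above
    if scheme["scheduling_algorithm"] not in camera.stored_algorithms:
        camera.download_algorithm(scheme["scheduling_algorithm"])  # model parameters sent via the server
    camera.deploy_algorithm(scheme["scheduling_algorithm"])        # e.g. an illegal-parking recognition algorithm
    camera.set_working_time(start="08:00", end="20:00")            # working time taken from the running time
    camera.set_focal_length_mm(scheme["focal_length_mm"])          # lens configuration taken from the focal length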
According to the method, the algorithm in the target camera is configured as the scheduling algorithm in the target scheduling scheme, and the working time and the lens configuration of the target camera are updated according to the running time and the focal length in the target scheduling scheme, so that the workload when a large number of cameras are configured for scheduling can be reduced, the problems of complicated configuration, easiness in error, difficulty in handling and the like can be effectively avoided, and the scheduling efficiency of the algorithm is improved.
In the embodiment, the association relation is established with the scheduling scheme of the camera through the deployment information and the scene information of the camera, the training sample is determined according to the association relation to train the algorithm scheduling model, the robustness of the algorithm scheduling process is improved, the full automation of the camera algorithm scheduling process is realized, and the efficiency and the accuracy of the camera algorithm scheduling are further improved.
Fig. 3 shows a block diagram of an algorithm scheduling device for a camera according to a second embodiment of the present invention, where the algorithm scheduling device for a camera is applied to a client, a computer device corresponding to the client is connected to a server, the server communicates with at least one camera, the computer device corresponding to the client obtains basic information and a scheduling scheme of the camera from the server, and target basic information of a target camera, where the target basic information of the target camera may be used to determine a target scheduling scheme of the target camera. For convenience of explanation, only portions relevant to the embodiments of the present invention are shown.
Referring to fig. 3, the algorithm scheduling apparatus for a camera includes:
the information obtaining module 31 is configured to obtain basic information of N cameras and a scheduling scheme used, where the scheduling scheme includes a running time, a focal length, and a scheduling algorithm, and N is an integer greater than zero;
the relationship establishing module 32 is configured to traverse the N cameras, and respectively establish association relationships between the basic information of the N cameras and the scheduling scheme, so as to obtain at least one association relationship;
the model training module 33 is configured to form training samples from the basic information and the scheduling schemes with association relationships to obtain at least one training sample, and train a preset algorithm scheduling model by using all the training samples to obtain a trained algorithm scheduling model;
the scheme determining module 34 is configured to input the obtained target basic information of the target camera into a trained algorithm scheduling model to perform prediction of a scheduling algorithm, so as to obtain a target scheduling scheme corresponding to the target camera;
the algorithm scheduling module 35 is configured to configure an algorithm in the target camera as a scheduling algorithm in the target scheduling scheme, and update the working time and the lens configuration of the target camera according to the running time and the focal length in the target scheduling scheme.
Optionally, the basic information includes installation information, lens information, identification area information, configured algorithm information and geographic location information;
the information acquisition module 31 includes:
the first information splicing sub-module is used for splicing the installation information, the lens information, the configured algorithm information, the identification area information and the geographic position information of the cameras, which are acquired in at least one preset time period, aiming at any camera to obtain at least one first splicing result corresponding to the preset time period;
the information determination submodule is used for determining all first splicing results to serve as basic information of the cameras, traversing the N cameras and obtaining the basic information of the N cameras.
Optionally, the information obtaining module 31 further includes:
the information prediction sub-module is used for acquiring scene images acquired by the cameras in a preset time period, inputting the scene images into a trained scene analysis model for camera information prediction to obtain predicted lens information, predicted installation information, predicted identification area information and predicted geographic position information;
the information comparison sub-module is used for respectively comparing the predicted lens information, the predicted installation information, the predicted identification area information and the predicted geographic position information with the lens information, the installation information, the identification area information and the geographic position information of the camera, which are acquired in a preset time period, so as to obtain a comparison result;
And the condition judging sub-module is used for taking the first splicing result of the preset time period as the basic information of the camera if the comparison result meets the preset condition, and traversing all the preset time periods to obtain the basic information of the camera.
Optionally, the scheduling scheme further includes an algorithm function and an algorithm type;
the information acquisition module 31 further includes:
the second information splicing sub-module is used for splicing the historical running time, the historical focal length, the historical scheduling algorithm, the historical algorithm function and the historical algorithm type of the cameras, which are acquired in at least one preset time period, aiming at any camera to obtain at least one second splicing result corresponding to the preset time period;
the scheme determining submodule is used for determining all second splicing results as scheduling schemes of the cameras, traversing N cameras and obtaining at least one scheduling scheme.
Optionally, the relationship establishing module 32 includes:
the time period traversing sub-module is used for associating basic information of the cameras in any preset time period with a scheduling scheme aiming at any one of the N cameras to obtain an association relationship of the cameras in the preset time period, traversing M preset time periods, and establishing the association relationship of the cameras in the corresponding M preset time periods, wherein M is an integer larger than zero;
The camera traversing sub-module is used for selecting the cameras from the N cameras, and continuously establishing the association relation of the selected cameras in the corresponding M preset time periods until the N cameras are traversed, so as to respectively establish the association relation of the N cameras in the corresponding M preset time periods and obtain at least one association relation.
Optionally, the model training module 33 includes:
the scheme prediction sub-module is used for inputting basic information in any training sample into a preset algorithm scheduling model to obtain a predicted sample scheme;
the loss calculation sub-module is used for calculating and obtaining the predictor loss of the corresponding training sample according to the sample scheme, the scheduling scheme in the training sample and the preset prediction loss function;
and the iterative training sub-module is used for traversing all training samples to obtain the predicted sub-loss of the corresponding training samples, adding all the predicted sub-losses to obtain the predicted loss, and training the algorithm scheduling model by adopting a gradient descent method based on the predicted loss until the predicted loss converges to meet the preset condition to obtain the trained algorithm scheduling model.
Optionally, the algorithm scheduling device for a camera further includes:
The sample construction module is used for constructing a target sample by the target basic information and the target scheduling scheme;
the model updating module is used for retraining the algorithm scheduling model by adopting all training samples and target samples to obtain an updated algorithm scheduling model, and the updated algorithm scheduling model is used for carrying out algorithm scheduling prediction according to the updated target basic information.
It should be noted that, because the content of information interaction and execution process between the modules and the sub-modules is based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be found in the method embodiment section, and details are not repeated here.
Fig. 4 is a schematic structural diagram of a computer device according to a third embodiment of the present invention. As shown in fig. 4, the computer device of this embodiment includes: at least one processor (only one shown in fig. 4), a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed, performs the steps of any of the various algorithm scheduling method embodiments described above for a camera.
The computer device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that fig. 4 is merely an example of a computer device and does not limit the computer device; a computer device may include more or fewer components than shown, combine certain components, or use different components, and may, for example, further include a network interface, a display screen, an input device, and the like.
The processor may be a CPU, or may be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory includes a readable storage medium, an internal memory, and the like, where the internal memory provides an environment for running the operating system and the computer-readable instructions in the readable storage medium. The readable storage medium may be a hard disk of the computer device; in other embodiments it may be an external storage device of the computer device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card (Flash Card), etc., provided on the computer device. Further, the memory may include both an internal storage unit and an external storage device of the computer device. The memory is used to store the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated as an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only used to distinguish them from each other and are not used to limit the protection scope of the present invention. For the specific working process of the units and modules in the above apparatus, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.

The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, and the computer program may be stored in a computer-readable storage medium; when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like.

The computer-readable medium may include at least: any entity or device capable of carrying computer program code, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a U-disk, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
The present invention may also be implemented as a computer program product which, when run on a computer device, causes the computer device to execute the steps of the above method embodiments, thereby implementing all or part of those steps.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative; the division of modules or units is only a logical functional division, and other division manners may be used in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included within the protection scope of the present invention.

Claims (10)

1. An algorithm scheduling method for a camera, which is characterized by comprising the following steps:
acquiring basic information of N cameras and a used scheduling scheme, wherein the scheduling scheme comprises operation time, focal length and a scheduling algorithm, and N is an integer larger than zero;
Traversing the N cameras, and respectively establishing association relations between basic information of the N cameras and a scheduling scheme to obtain at least one association relation;
forming training samples by basic information with association relation and a scheduling scheme to obtain at least one training sample, and training a preset algorithm scheduling model by using all the training samples to obtain a trained algorithm scheduling model;
inputting the acquired target basic information of the target camera into the trained algorithm scheduling model to predict a scheduling algorithm, so as to obtain a target scheduling scheme corresponding to the target camera;
and configuring an algorithm in the target camera as a scheduling algorithm in the target scheduling scheme, and updating the working time and the lens configuration of the target camera according to the running time and the focal length in the target scheduling scheme.
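As an informal, non-limiting illustration of the flow of claim 1, the sketch below feeds the encoded target basic information into the trained model and applies the decoded scheme to the camera; `encode_info`, `decode_scheme` and the camera setter methods are hypothetical names, not an actual camera SDK.

```python
# Hypothetical end-to-end inference step: encoding/decoding helpers and camera
# setters are assumed names used only for illustration.
def apply_target_scheme(model, encode_info, decode_scheme, target_camera):
    info_vec = encode_info(target_camera.basic_info)   # encode target basic information
    scheme_vec = model(info_vec)                       # predict with the trained model
    scheme = decode_scheme(scheme_vec)                 # -> {"algorithm", "running_time", "focal_length"}
    target_camera.set_algorithm(scheme["algorithm"])   # configure the scheduled algorithm
    target_camera.set_working_time(scheme["running_time"])
    target_camera.set_focal_length(scheme["focal_length"])
    return scheme
```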
2. The algorithm scheduling method according to claim 1, wherein the base information includes installation information, shot information, identification area information, configured algorithm information, and geographical location information;
the obtaining basic information of the N cameras comprises the following steps:
for any camera, splicing the installation information, the lens information, the configured algorithm information, the identification area information and the geographic position information of the camera, which are acquired in at least one preset time period, to obtain at least one first splicing result corresponding to the preset time period;
and determining all first splicing results as the basic information of the camera, and traversing the N cameras to obtain the basic information of the N cameras.
3. The algorithm scheduling method according to claim 2, further comprising, after the obtaining at least one first splicing result corresponding to the preset time period:
for any preset time period, acquiring a scene image acquired by the camera in the preset time period, inputting the scene image into a trained scene analysis model for camera information prediction, and obtaining predicted lens information, predicted installation information, predicted identification area information and predicted geographic position information;
comparing the predicted lens information, the predicted installation information, the predicted identification area information and the predicted geographic position information with the lens information, the installation information, the identification area information and the geographic position information of the camera acquired in the preset time period respectively to obtain a comparison result;
if the comparison result meets the preset condition, the first splicing result of the preset time period is used as the basic information of the camera, and all the preset time periods are traversed to obtain the basic information of the camera.
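One possible, purely illustrative form of this consistency check is sketched below; the field names, the equality comparison and the agreement threshold are assumptions, and the trained scene analysis model is taken as given.

```python
# Hypothetical consistency check: a scene-analysis model predicts the four
# information fields from a scene image, which are compared with recorded values.
def verify_basic_info(scene_model, scene_image, recorded, threshold=1.0):
    predicted = scene_model(scene_image)   # dict with the four predicted fields
    fields = ["lens_info", "installation_info",
              "identification_area_info", "geo_location_info"]
    matches = sum(1 for f in fields if predicted[f] == recorded[f])
    # keep the first splicing result only if enough fields agree;
    # the agreement threshold is an assumption, not the claimed preset condition
    return matches / len(fields) >= threshold
```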
4. The algorithm scheduling method according to claim 2, wherein the scheduling scheme further comprises an algorithm function and an algorithm type;
the acquiring the scheduling scheme used by the N cameras comprises the following steps:
for any camera, splicing the historical running time, the historical focal length, the historical scheduling algorithm, the historical algorithm function and the historical algorithm type of the camera, which are acquired in the at least one preset time period, to obtain at least one second splicing result corresponding to the preset time period;
and determining all second splicing results as a scheduling scheme of the cameras, and traversing the N cameras to obtain at least one scheduling scheme.
5. The algorithm scheduling method according to claim 4, wherein the traversing the N cameras and respectively establishing association relationships between the basic information of the N cameras and the scheduling schemes to obtain at least one association relationship comprises:
for any one of the N cameras, associating basic information of the camera in any preset time period with a scheduling scheme to obtain an association relationship corresponding to the camera in the preset time period, traversing M preset time periods, and establishing the association relationship of the camera in the corresponding M preset time periods, wherein M is an integer larger than zero;
Selecting cameras from the N cameras, and continuing to establish association relations of the selected cameras in the corresponding M preset time periods until the N cameras are traversed, so as to respectively establish association relations of the N cameras in the corresponding M preset time periods, and obtain at least one association relation.
6. The algorithm scheduling method according to claim 1, wherein training the preset algorithm scheduling model using all training samples to obtain a trained algorithm scheduling model comprises:
inputting basic information in any training sample into the preset algorithm scheduling model to obtain a predicted sample scheme;
calculating the prediction sub-loss corresponding to the training sample according to the predicted sample scheme, the scheduling scheme in the training sample and a preset prediction loss function;
traversing all training samples to obtain the prediction sub-losses of the corresponding training samples, adding all the prediction sub-losses to obtain a prediction loss, and training the algorithm scheduling model by adopting a gradient descent method based on the prediction loss until the prediction loss converges to meet a preset condition, so as to obtain the trained algorithm scheduling model.
7. The algorithm scheduling method according to any one of claims 1 to 6, further comprising, after the obtaining the target scheduling scheme corresponding to the target camera:
forming a target sample by the target basic information and the target scheduling scheme;
and retraining the algorithm scheduling model by adopting all training samples and the target samples to obtain an updated algorithm scheduling model, wherein the updated algorithm scheduling model is used for carrying out algorithm scheduling prediction according to the updated target basic information.
8. An algorithm scheduling device for a camera, characterized by comprising:
the information acquisition module is used for acquiring basic information of N cameras and a used scheduling scheme, wherein the scheduling scheme comprises operation time, focal length and a scheduling algorithm, and N is an integer greater than zero;
the relation establishing module is used for traversing the N cameras, and respectively establishing association relations between the basic information of the N cameras and the scheduling scheme to obtain at least one association relation;
the model training module is used for forming training samples by the basic information with the association relation and the scheduling scheme to obtain at least one training sample, and training a preset algorithm scheduling model by using all the training samples to obtain a trained algorithm scheduling model;
The scheme determining module is used for inputting the acquired target basic information of the target camera into the trained algorithm scheduling model to predict a scheduling algorithm, so as to obtain a target scheduling scheme corresponding to the target camera;
and the algorithm scheduling module is used for configuring an algorithm in the target camera as a scheduling algorithm in the target scheduling scheme, and updating the working time and the lens configuration of the target camera according to the running time and the focal length in the target scheduling scheme.
9. A computer device, characterized in that it comprises a processor, a memory and a computer program stored in the memory and executable on the processor, wherein the processor implements the algorithm scheduling method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the algorithm scheduling method of any one of claims 1 to 7.
CN202211710090.4A 2022-12-29 2022-12-29 Algorithm scheduling method and device for camera, computer equipment and medium Pending CN116170660A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211710090.4A CN116170660A (en) 2022-12-29 2022-12-29 Algorithm scheduling method and device for camera, computer equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211710090.4A CN116170660A (en) 2022-12-29 2022-12-29 Algorithm scheduling method and device for camera, computer equipment and medium

Publications (1)

Publication Number Publication Date
CN116170660A (en) 2023-05-26

Family

ID=86410545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211710090.4A Pending CN116170660A (en) 2022-12-29 2022-12-29 Algorithm scheduling method and device for camera, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN116170660A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060020568A1 (en) * 2004-07-26 2006-01-26 Charles River Analytics, Inc. Modeless user interface incorporating automatic updates for developing and using bayesian belief networks
US10270962B1 (en) * 2017-12-13 2019-04-23 North Of You Llc Automatic camera settings configuration for image capture
US20210093968A1 (en) * 2019-09-26 2021-04-01 Sony Interactive Entertainment Inc. Artificial intelligence (ai) controlled camera perspective generator and ai broadcaster
CN114610471A (en) * 2022-04-18 2022-06-10 深圳奇迹智慧网络有限公司 AI scheduling method and system of intelligent rod

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BAPIN, Y (BAPIN, YERZHIGIT): "Camera-Driven Probabilistic Algorithm for Multi-Elevator Systems", ENERGIES, 7 January 2021 (2021-01-07) *
董文会 (DONG Wenhui): "Research on continuous target tracking methods in multi-camera surveillance networks", 中国优秀硕士毕业论文 (China Excellent Master's Theses), 15 January 2016 (2016-01-15) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117687772A (en) * 2023-07-31 2024-03-12 荣耀终端有限公司 Algorithm scheduling method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination