CN108803328B - Camera self-adaptive adjusting method and device and camera - Google Patents

Camera self-adaptive adjusting method and device and camera

Info

Publication number
CN108803328B
CN108803328B CN201810614204.2A
Authority
CN
China
Prior art keywords
reinforcement learning
camera
adjustment
state information
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810614204.2A
Other languages
Chinese (zh)
Other versions
CN108803328A (en)
Inventor
姚佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Huihe Technology Development Co ltd
Original Assignee
Guangdong Huihe Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Huihe Technology Development Co ltd filed Critical Guangdong Huihe Technology Development Co ltd
Priority to CN201810614204.2A priority Critical patent/CN108803328B/en
Publication of CN108803328A publication Critical patent/CN108803328A/en
Application granted granted Critical
Publication of CN108803328B publication Critical patent/CN108803328B/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B13/042 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a camera adaptive adjustment method, a device, a camera and a computer storage medium. The camera adaptive adjustment method comprises the following steps: inputting a plurality of determined core state information and the corresponding adjustment actions of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters; and during automatic adjustment, acquiring the optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters. Through the reinforcement learning model and the reinforcement learning parameters, the camera adaptive adjustment method enables the camera to autonomously judge the trend of environmental change and make the corresponding adjustment, which reduces the probability of abnormality and failure.

Description

Camera self-adaptive adjusting method and device and camera
Technical Field
The invention relates to the technical field of intelligent security, in particular to a camera self-adaptive adjusting method and device, a camera and a computer storage medium.
Background
With the continuous development and popularization of security technology, surveillance cameras have spread to every corner of the city. However, the surveillance camera at the front end of the security system is often the weakest link in the whole video surveillance system: the camera is easily affected by weather and environment, and once an abnormality or failure occurs, the security monitoring of the related area exists in name only.
In the prior art, to cope with the influence of weather and environment, various auxiliary hardware can be arranged in the camera; for example, a fan is provided in the camera for heat dissipation, so as to avoid abnormality or failure of the camera caused by overheating. However, the algorithm or application program controlling the various auxiliary hardware in the camera is based on a simple strategy, for example, the fan enters the normal operation state only when the temperature exceeds a certain preset temperature. Such an adjustment strategy is too simple: it cannot satisfy the application scenarios of various cameras, the camera cannot make corresponding adjustments according to the trend of environmental change, and the probability of camera abnormality and failure cannot be reduced well.
Disclosure of Invention
In view of the above problems, the present invention provides a camera adaptive adjustment method, a device, a camera and a computer storage medium, so that the camera autonomously judges the trend of environmental change and makes the corresponding adjustment, reducing the probability of abnormality and failure.
In order to achieve the purpose, the invention adopts the following technical scheme:
a camera adaptive adjustment method comprises the following steps:
inputting a plurality of determined core state information and corresponding adjustment actions of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters;
and during automatic adjustment, acquiring an optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters.
Preferably, the reinforcement learning model is an on-policy linear sarsa reinforcement learning model.
Preferably, the action value function fitted by the reinforcement learning model is:

$$\hat{q}(S, A, \mathbf{w}) = \mathbf{x}(S, A)^{\top} \mathbf{w}$$

where S is the core state information, A is the adjustment action, w is the reinforcement learning parameter, and x(S, A) is the learning sample (feature vector), that is,

$$\mathbf{x}(S, A) = \nabla_{\mathbf{w}} \hat{q}(S, A, \mathbf{w})$$

is the gradient with respect to the reinforcement learning parameter w.
Preferably, the objective function of the reinforcement learning parameter w of the reinforcement learning model is:

$$J(\mathbf{w}) = \mathbb{E}_{\pi}\!\left[\left(q_{\pi}(S, A) - \hat{q}(S, A, \mathbf{w})\right)^{2}\right]$$

where $q_{\pi}(S, A)$ is the true action value function based on the initial strategy, estimated with the sarsa(λ) algorithm by the λ-return $q_{t}^{\lambda}$:

$$q_{t}^{(n)} = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^{n} Q(S_{t+n}), \qquad q_{t}^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} q_{t}^{(n)}$$

where $R_{t+n}$ is the gain brought by the adjustment action at time t+n, λ is a hyper-parameter preset by the sarsa(λ) algorithm, γ is a hyper-parameter preset for reinforcement learning, $Q(S_{t+n})$ is $\hat{q}(S_{t+n}, A_{t+n}, \mathbf{w})$, and π is the initial strategy.
Preferably, the camera adaptive adjustment method further includes:
and updating and optimizing the reinforcement learning parameters according to the optimal adjustment action and the current core state information.
Preferably, the formula for updating and optimizing the reinforcement learning parameters is:

$$\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(q_{t}^{\lambda} - \hat{q}(S_{t}, A_{t}, \mathbf{w})\right) \nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$$

where $\nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$ is the gradient with respect to the reinforcement learning parameter w and α is the learning step size.
The invention also provides a camera self-adaptive adjusting device, which comprises:
the parameter generation module is used for inputting the plurality of determined core state information and the adjustment actions of the corresponding initial strategies into a pre-established reinforcement learning model to generate reinforcement learning parameters;
and the automatic adjustment module is used for acquiring an optimal adjustment action by utilizing the current core state information, the reinforcement learning model and the reinforcement learning parameters during automatic adjustment.
Preferably, the camera adaptive adjustment device further includes:
and the parameter updating module is used for updating and optimizing the reinforcement learning parameters according to the optimal adjusting action and the current core state information.
The invention also provides a camera, which comprises a memory and a processor, wherein the memory is used for storing the computer program, and the processor runs the computer program to enable the camera to execute the camera self-adaptive adjusting method.
The invention also provides a computer storage medium storing a computer program for use in the camera.
The invention provides a camera adaptive adjustment method, which comprises the following steps: inputting a plurality of determined core state information and the corresponding adjustment actions of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters; and during automatic adjustment, acquiring the optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters. Through the reinforcement learning model and the reinforcement learning parameters, the camera adaptive adjustment method enables the camera to autonomously judge the trend of environmental change and make the corresponding adjustment, which reduces the probability of abnormality and failure.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, and it should be understood that the following drawings only illustrate some embodiments of the present invention, and therefore should not be considered as limiting the scope of the present invention.
Fig. 1 is a camera provided in an embodiment of the present invention;
fig. 2 is a flowchart of a camera adaptive adjustment method provided in embodiment 1 of the present invention;
fig. 3 is a flowchart of a camera adaptive adjustment method provided in embodiment 2 of the present invention;
fig. 4 is a structural diagram of a camera adaptive adjustment device provided in embodiment 3 of the present invention;
fig. 5 is a structural diagram of another adaptive camera adjustment device provided in embodiment 3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The following embodiments can be applied to a camera as shown in fig. 1. Fig. 1 shows a block diagram of the camera; the camera 100 includes: an Ethernet interface 110, a memory 120, a sensor 130, an audio circuit 140, a wireless fidelity (WiFi) module 150, a processor 160, and a power supply 170. Those skilled in the art will appreciate that the configuration of the camera 100 shown in fig. 1 is not limiting; the camera may include more or fewer components than shown, combine some components, or use a different arrangement of components.
Example 1
Fig. 2 is a flowchart of a camera adaptive adjustment method provided in embodiment 1 of the present invention, where the method includes the following steps:
step S21: and inputting the determined core state information and the corresponding adjustment action of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters.
In the embodiment of the invention, the core state information of the camera comprises the temperature, the humidity, the working state of the fan, the working state of the heater and the like. The acquired temperature information comprises the core temperature information of the camera and the temperature information of each key component; the acquired humidity information comprises the core humidity information of the camera and the humidity information of each key component; the acquired working state of the fan comprises the working power and gear of the fan, and the fan can be divided into 5 gears according to power; the acquired working state of the heater comprises the working power and gear of the heater, and the heater can likewise be divided into 5 gears according to power.
The core state information of the camera can be collected using sensors. For example, a plurality of humidity sensors can be used to collect the core humidity of the camera and the humidity of each key component, and a plurality of humidity thresholds can be set; when the humidity reaches one of the humidity thresholds, the corresponding dehumidification strategy can be started to adjust the heater and the fan. The camera may detect the various core state information in real time by using an algorithm or application program, for example, processing the information of each sensor in real time with the application program and monitoring the various core state information in real time; each time an adjustment of the heater or the fan is performed, the adjustment action, the core state information before the adjustment and the core state information after the adjustment are recorded and stored as a training sample for reinforcement learning.
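To make the recorded training sample concrete, the following is a minimal sketch (not part of the patent text): the field names, the 5-gear encoding and the sample layout are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CoreState:
    """Core state information read from the camera's sensors (illustrative fields)."""
    core_temp: float                      # core temperature of the camera (deg C)
    component_temps: Tuple[float, ...]    # temperatures of the key components
    core_humidity: float                  # core humidity (%RH)
    component_humidities: Tuple[float, ...]
    fan_gear: int                         # fan gear, 0-4 (5 gears divided by power)
    heater_gear: int                      # heater gear, 0-4 (5 gears divided by power)

@dataclass
class TrainingSample:
    """One reinforcement-learning sample recorded around a single heater/fan adjustment."""
    state_before: CoreState               # core state information before the adjustment
    action: int                           # quantized adjustment action that was executed
    state_after: CoreState                # core state information after the adjustment
```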
In the embodiment of the present invention, the initial strategy refers to the original countermeasures against abnormal core states of the camera, preset by maintenance personnel or before the camera leaves the factory. For example: when the temperature is greater than a predetermined temperature value, the power or gear of the fan is increased and the power or gear of the heater is reduced; when the temperature is lower than a predetermined temperature value, the power or gear of the fan is reduced and the power or gear of the heater is increased; when the humidity is higher than a predetermined humidity value, the power or gear of the fan is increased and the power or gear of the heater is increased.
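As a minimal sketch of such a rule-based initial strategy (the threshold values and the 5-gear range 0-4 are illustrative assumptions; the patent does not fix concrete numbers):

```python
# Hypothetical thresholds; concrete values are not specified in the patent.
TEMP_HIGH, TEMP_LOW, HUMIDITY_HIGH = 55.0, 5.0, 80.0
MAX_GEAR = 4  # 5 gears: 0..4

def initial_strategy(core_temp: float, core_humidity: float,
                     fan_gear: int, heater_gear: int) -> tuple:
    """Rule-based initial strategy: return the target (fan_gear, heater_gear) for one step."""
    if core_temp > TEMP_HIGH:          # too hot: more cooling, less heating
        fan_gear, heater_gear = min(fan_gear + 1, MAX_GEAR), max(heater_gear - 1, 0)
    elif core_temp < TEMP_LOW:         # too cold: less cooling, more heating
        fan_gear, heater_gear = max(fan_gear - 1, 0), min(heater_gear + 1, MAX_GEAR)
    if core_humidity > HUMIDITY_HIGH:  # too humid: raise both fan and heater to dehumidify
        fan_gear, heater_gear = min(fan_gear + 1, MAX_GEAR), min(heater_gear + 1, MAX_GEAR)
    return fan_gear, heater_gear
```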
In the embodiment of the invention, the reinforcement learning model is an on-policy linear sarsa (linear on-policy) reinforcement learning model. When the learning parameters are updated, the reinforcement learning model using the on-policy linear sarsa algorithm can take the current adjustment action and the adjusted state information as samples, so that the self-learning of the camera is realized and the camera can adaptively acquire the optimal adjustment action for the current state information.
The action value function fitted by the reinforcement learning model is:

$$\hat{q}(S, A, \mathbf{w}) = \mathbf{x}(S, A)^{\top} \mathbf{w}$$

where S is the core state information, A is the adjustment action, w is the reinforcement learning parameter, and x(S, A) is the learning sample (feature vector), that is,

$$\mathbf{x}(S, A) = \nabla_{\mathbf{w}} \hat{q}(S, A, \mathbf{w})$$

is the gradient with respect to the reinforcement learning parameter w.
The fitted action value function is also a trend function: it is used to fit the adjustment action that the camera should take under different core states. The camera continuously improves this function through reinforcement learning and uses it to make the correct adjustment; the process of improving the function is the reinforcement learning process, and the learning samples are the samples generated by previous autonomous adjustments. In the embodiment of the invention, a reinforcement learning parameter w is defined in the fitted action value function, and a concrete numerical reinforcement learning parameter w is generated after the learning samples are input. Taking the optimization of this parameter as the main objective of reinforcement learning, the reinforcement learning parameter w is updated through continuous self-learning, and the functional relation among the core state information S, the adjustment action A and the reinforcement learning parameter w can be constructed by using the on-policy linear sarsa algorithm.
The adjustment actions A include increasing or decreasing the fan power by one gear, increasing or decreasing the heater power by one gear, increasing or decreasing the fan power by two gears, increasing or decreasing the heater power by two gears, and so on. The various adjustment actions A can be converted into numerical values by quantization for reinforcement learning; for example, they can be quantized into binary or hexadecimal numbers.
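The sketch below illustrates the linear action-value form q̂(S, A, w) = x(S, A)ᵀw over a quantized action set; the particular feature construction and action encoding are assumptions for illustration, since the patent does not specify how x(S, A) is built.

```python
import numpy as np

# Hypothetical quantized action set: (fan gear delta, heater gear delta).
ACTIONS = [(df, dh) for df in (-2, -1, 0, 1, 2) for dh in (-2, -1, 0, 1, 2)]

def features(state_vec: np.ndarray, action_idx: int) -> np.ndarray:
    """Feature vector x(S, A): state readings concatenated with a one-hot action encoding."""
    one_hot = np.zeros(len(ACTIONS))
    one_hot[action_idx] = 1.0
    return np.concatenate([state_vec, one_hot])

def q_hat(state_vec: np.ndarray, action_idx: int, w: np.ndarray) -> float:
    """Fitted action value: q_hat(S, A, w) = x(S, A)^T w."""
    return float(features(state_vec, action_idx) @ w)

# Usage sketch: state_vec could be [core_temp, core_humidity, fan_gear, heater_gear];
# w then has dimension len(state_vec) + len(ACTIONS).
```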
The objective function of the reinforcement learning parameter w of the reinforcement learning model is:

$$J(\mathbf{w}) = \mathbb{E}_{\pi}\!\left[\left(q_{\pi}(S, A) - \hat{q}(S, A, \mathbf{w})\right)^{2}\right]$$

where $q_{\pi}(S, A)$ is the true action value function based on the initial strategy, estimated with the sarsa(λ) algorithm by the λ-return $q_{t}^{\lambda}$:

$$q_{t}^{(n)} = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^{n} Q(S_{t+n}), \qquad q_{t}^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} q_{t}^{(n)}$$

where $R_{t+n}$ is the gain brought by the adjustment action at time t+n, λ is a hyper-parameter preset by the sarsa(λ) algorithm, γ is a hyper-parameter preset for reinforcement learning, $Q(S_{t+n})$ is $\hat{q}(S_{t+n}, A_{t+n}, \mathbf{w})$, π is the initial strategy, and n is a natural number, for example 1, 2, 3, and so on.
In the embodiment of the present invention, an objective function may be established for determining the degree of optimization of the reinforcement learning parameter. The gain brought by the adjustment action in the above equation is in fact determined by the core state of the camera after the adjustment action is performed; for example, a gain value may be defined in the reinforcement learning model such that the gain value is 1 when the temperature of the camera after the adjustment action is at a suitable temperature value, and -1 otherwise.
The hyper-parameter λ and the hyper-parameter γ are set by maintenance personnel, who can choose optimal hyper-parameters to improve the performance and effect of the reinforcement learning model and thereby improve the self-adjustment capability of the camera.
For the analysis of the objective function J(w), the partial derivative with respect to the reinforcement learning parameter w can be taken, denoted Δw:

$$\Delta\mathbf{w} = \alpha \left(q_{\pi}(S, A) - \hat{q}(S, A, \mathbf{w})\right) \nabla_{\mathbf{w}} \hat{q}(S, A, \mathbf{w})$$

where α is the learning step size.
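A minimal numerical sketch of the λ-return estimate and of this gradient step follows; the truncation of the λ-return and the default step size are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def n_step_return(rewards, gamma, q_bootstrap):
    """q_t^(n) = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n} + gamma^n * Q(S_{t+n})."""
    g = sum((gamma ** k) * r for k, r in enumerate(rewards))
    return g + (gamma ** len(rewards)) * q_bootstrap

def lambda_return(n_step_returns, lam):
    """Forward-view lambda-return (truncated): (1 - lam) * sum_n lam^(n-1) * q_t^(n)."""
    return (1.0 - lam) * sum((lam ** (n - 1)) * q
                             for n, q in enumerate(n_step_returns, start=1))

def delta_w(w, x, target, alpha=0.01):
    """Gradient step: Delta w = alpha * (q_pi(S,A) - q_hat(S,A,w)) * x(S,A), with x = grad_w q_hat."""
    return alpha * (target - float(np.dot(x, w))) * x
```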
step S22: and during automatic adjustment, acquiring the optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters.
In the embodiment of the invention, the camera can input the core state information collected at the current moment into the reinforcement learning model and obtain the optimal adjustment action through the action value function and the reinforcement learning parameter w obtained by reinforcement learning. After executing the optimal adjustment action, the camera can collect the core state information again to obtain the gain of executing the adjustment action, and store the adjustment action information, the core state information before the adjustment action and the corresponding gain as a reinforcement learning sample. After reinforcement learning is performed, the reinforcement learning model replaces the initial strategy to generate the optimal adjustment action and the corresponding learning sample, so that the reinforcement learning model can continuously perform adaptive learning.
In addition, the reinforcement learning model can also learn from high-quality learning samples input by maintenance personnel, so as to fit more quickly and accurately and make the optimal adaptive adjustment.
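To make step S22 concrete, the sketch below selects the adjustment action with the highest fitted action value and computes the gain from the post-adjustment state; the suitable-temperature range is an illustrative assumption, and an ε-greedy variant would be a natural on-policy refinement even though the patent text only asks for the optimal action.

```python
import numpy as np

def best_action(q_values: np.ndarray) -> int:
    """Pick the index of the adjustment action with the highest fitted value q_hat(S, A, w)."""
    return int(np.argmax(q_values))

def gain(core_temp_after: float, suitable=(10.0, 45.0)) -> float:
    """Illustrative gain: +1 if the post-adjustment core temperature is suitable, else -1."""
    lo, hi = suitable
    return 1.0 if lo <= core_temp_after <= hi else -1.0
```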
Example 2
Fig. 3 is a flowchart of a method for adaptively adjusting a camera according to embodiment 2 of the present invention, where the method includes the following steps:
step S31: and inputting the determined core state information and the corresponding adjustment action of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters.
This step is identical to step S21 described above, and will not be described herein again.
Step S32: and during automatic adjustment, acquiring the optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters.
This step is identical to step S22 described above, and will not be described herein again.
Step S33: and updating and optimizing the reinforcement learning parameters according to the optimal adjustment action and the current core state information.
In the embodiment of the invention, after the camera executes the optimal adjustment action each time, the core state information is collected again by each sensor after a period of time; whether the adjustment has achieved the expected effect can be judged from this core state information so as to generate the corresponding gain, and the optimal adjustment action and the core state information before the adjustment are stored as a learning sample.
The formula for updating and optimizing the reinforcement learning parameter w is:

$$\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(q_{t}^{\lambda} - \hat{q}(S_{t}, A_{t}, \mathbf{w})\right) \nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$$

where $\nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$ can be calculated from $\mathbf{x}(S_{t}, A_{t})$, i.e. the gradient with respect to the reinforcement learning parameter w together with the corresponding learning sample. In this way the reinforcement learning parameter w can be updated after each learning step, ensuring that the camera can adjust itself adaptively through reinforcement learning under different environments.
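Putting the pieces together, the following is a hedged sketch of one online update for a single adjustment cycle. It uses a one-step sarsa target for brevity (the λ-return would replace it in the full scheme), and the step size and discount values are illustrative assumptions.

```python
import numpy as np

ALPHA, GAMMA = 0.01, 0.9  # assumed learning step size and discount hyper-parameter

def sarsa_update(w: np.ndarray, x_t: np.ndarray, reward: float,
                 x_next: np.ndarray) -> np.ndarray:
    """One linear sarsa update: w <- w + alpha * (R + gamma*q(S',A') - q(S,A)) * x(S,A).

    x_t and x_next are the feature vectors x(S_t, A_t) and x(S_{t+1}, A_{t+1});
    the full lambda-return target q_t^lambda would replace the one-step target below.
    """
    q_t = float(x_t @ w)
    q_next = float(x_next @ w)
    td_error = reward + GAMMA * q_next - q_t
    return w + ALPHA * td_error * x_t

# Usage sketch: after each adjustment, build x_t from the pre-adjustment core state and the
# executed action, observe the gain, build x_next from the new state and the next chosen
# action, then: w = sarsa_update(w, x_t, gain_value, x_next)
```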
Example 3
Fig. 4 is a structural diagram of a camera adaptive adjustment device provided in embodiment 3 of the present invention.
The camera adaptive adjustment apparatus 400 includes:
a parameter generating module 410, configured to input the multiple determined core state information and the adjustment actions of the corresponding initial strategies into a pre-established reinforcement learning model, and generate reinforcement learning parameters;
the automatic adjustment module 420 is configured to, during automatic adjustment, obtain an optimal adjustment action by using the current core state information, the reinforcement learning model, and the reinforcement learning parameters.
As shown in fig. 5, the adaptive camera adjustment apparatus 400 further includes:
and the parameter updating module 430 is configured to update the optimized reinforcement learning parameter according to the optimal adjustment action and the current core state information.
For more specific description of each module in this embodiment, reference may be made to corresponding parts in the foregoing embodiments, and details are not described here.
In addition, the invention also provides a camera, which comprises a memory and a processor, wherein the memory can be used for storing a computer program, and the processor enables the camera to execute the functions of the modules in the method or the camera adaptive adjustment device by running the computer program.
The memory may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound recording function, an image recording function, etc.), and the like; the data storage area may store data (such as audio data) created according to the use of the camera, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The embodiment also provides a computer storage medium for storing a computer program used in the camera.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part of the technical solution that contributes to the prior art in essence can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A camera adaptive adjustment method is characterized by comprising the following steps:
inputting a plurality of determined core state information and corresponding adjustment actions of the initial strategy into a pre-established reinforcement learning model to generate reinforcement learning parameters;
during automatic adjustment, acquiring an optimal adjustment action by using the current core state information, the reinforcement learning model and the reinforcement learning parameters;
the action value function fitted by the reinforcement learning model is:

$$\hat{q}(S, A, \mathbf{w}) = \mathbf{x}(S, A)^{\top} \mathbf{w}$$

where S is the core state information, A is the adjustment action, w is the reinforcement learning parameter, and x(S, A) is the learning sample, that is,

$$\mathbf{x}(S, A) = \nabla_{\mathbf{w}} \hat{q}(S, A, \mathbf{w})$$

is the gradient with respect to the reinforcement learning parameter w, and n is a natural number.
2. The adaptive camera adjustment method according to claim 1, wherein the reinforcement learning model is an on-policy linear sarsa reinforcement learning model.
3. The adaptive camera adjustment method according to claim 1, wherein the objective function of the reinforcement learning parameter w of the reinforcement learning model is:

$$J(\mathbf{w}) = \mathbb{E}_{\pi}\!\left[\left(q_{\pi}(S, A) - \hat{q}(S, A, \mathbf{w})\right)^{2}\right]$$

where $q_{\pi}(S, A)$ is the true action value function based on the initial strategy, estimated with the sarsa(λ) algorithm by the λ-return $q_{t}^{\lambda}$:

$$q_{t}^{(n)} = R_{t+1} + \gamma R_{t+2} + \cdots + \gamma^{n-1} R_{t+n} + \gamma^{n} Q(S_{t+n}), \qquad q_{t}^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} q_{t}^{(n)}$$

where $R_{t+n}$ is the gain brought by the adjustment action at time t+n, λ is a hyper-parameter preset by the sarsa(λ) algorithm, γ is a hyper-parameter preset for reinforcement learning, $Q(S_{t+n})$ is $\hat{q}(S_{t+n}, A_{t+n}, \mathbf{w})$, π is the initial strategy, and $\mathbb{E}_{\pi}[\,\cdot\,]$ is the mathematical expectation under the initial strategy.
4. The adaptive camera adjustment method according to claim 1, further comprising:
and updating and optimizing the reinforcement learning parameters according to the optimal adjustment action and the current core state information.
5. The adaptive camera adjustment method according to claim 4, wherein the formula for updating and optimizing the reinforcement learning parameters is:

$$\mathbf{w} \leftarrow \mathbf{w} + \alpha \left(q_{t}^{\lambda} - \hat{q}(S_{t}, A_{t}, \mathbf{w})\right) \nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$$

where $\nabla_{\mathbf{w}} \hat{q}(S_{t}, A_{t}, \mathbf{w})$ is the gradient with respect to the reinforcement learning parameter w and α is the learning step size.
6. A camera self-adaptive adjusting device is characterized by comprising:
the parameter generation module is used for inputting the plurality of determined core state information and the adjustment actions of the corresponding initial strategies into a pre-established reinforcement learning model to generate reinforcement learning parameters;
the automatic adjustment module is used for acquiring an optimal adjustment action by utilizing the current core state information, the reinforcement learning model and the reinforcement learning parameters during automatic adjustment;
the action value function fitted by the reinforcement learning model is:

$$\hat{q}(S, A, \mathbf{w}) = \mathbf{x}(S, A)^{\top} \mathbf{w}$$

where S is the core state information, A is the adjustment action, w is the reinforcement learning parameter, and x(S, A) is the learning sample, that is,

$$\mathbf{x}(S, A) = \nabla_{\mathbf{w}} \hat{q}(S, A, \mathbf{w})$$

is the gradient with respect to the reinforcement learning parameter w, and n is a natural number.
7. The adaptive camera adjustment device according to claim 6, further comprising:
and the parameter updating module is used for updating and optimizing the reinforcement learning parameters according to the optimal adjusting action and the current core state information.
8. A camera comprising a memory for storing a computer program and a processor for executing the computer program to cause the camera to perform the camera adaptive adjustment method according to any one of claims 1 to 5.
9. A computer storage medium, characterized by storing a computer program used in the camera according to claim 8.
CN201810614204.2A 2018-06-14 2018-06-14 Camera self-adaptive adjusting method and device and camera Active CN108803328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810614204.2A CN108803328B (en) 2018-06-14 2018-06-14 Camera self-adaptive adjusting method and device and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810614204.2A CN108803328B (en) 2018-06-14 2018-06-14 Camera self-adaptive adjusting method and device and camera

Publications (2)

Publication Number Publication Date
CN108803328A CN108803328A (en) 2018-11-13
CN108803328B true CN108803328B (en) 2021-11-09

Family

ID=64086936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810614204.2A Active CN108803328B (en) 2018-06-14 2018-06-14 Camera self-adaptive adjusting method and device and camera

Country Status (1)

Country Link
CN (1) CN108803328B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112084925A (en) * 2020-09-03 2020-12-15 厦门利德集团有限公司 Intelligent electric power safety monitoring method and system
CN112734759B (en) * 2021-03-30 2021-06-29 常州微亿智造科技有限公司 Method and device for determining trigger point of flying shooting
CN113568305A (en) * 2021-06-10 2021-10-29 贵州恰到科技有限公司 Control method of deep reinforcement learning model robot

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010045272A1 (en) * 2008-10-14 2010-04-22 Honda Motor Co., Ltd. Smoothed sarsa: reinforcement learning for robot delivery tasks
WO2016061724A1 (en) * 2014-10-20 2016-04-28 中国科学院自动化研究所 All-weather video monitoring method based on deep learning
CN105549384A (en) * 2015-09-01 2016-05-04 中国矿业大学 Inverted pendulum control method based on neural network and reinforced learning
CN106483852A (en) * 2016-12-30 2017-03-08 北京天恒长鹰科技股份有限公司 A kind of stratospheric airship control method based on Q Learning algorithm and neutral net
CN106766006A (en) * 2017-02-21 2017-05-31 华南理工大学 Air-conditioning system adaptive temperature compensation device and method based on machine vision
CN107272785A (en) * 2017-07-19 2017-10-20 北京上格云技术有限公司 A kind of electromechanical equipment and its control method, computer-readable medium


Also Published As

Publication number Publication date
CN108803328A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108803328B (en) Camera self-adaptive adjusting method and device and camera
US20230072644A1 (en) Backup and tiered policy coordination in time series databases
US10482379B2 (en) Systems and methods to perform machine learning with feedback consistency
JP7082461B2 (en) Failure prediction method, failure prediction device and failure prediction program
US10007783B2 (en) Method for protecting an automation component against program manipulations by signature reconciliation
WO2019133316A1 (en) Reconstruction-based anomaly detection
CN108900363B (en) Method, device and system for adjusting working state of local area network
CN108810526B (en) Camera fault intelligent prediction method and device and camera management server
EP3876061A1 (en) Method for validation and selection on machine learning based models for monitoring the state of a machine
CN111935064A (en) Industrial control network threat automatic isolation method and system
CN115774652B (en) Cluster control equipment health monitoring method, equipment and medium based on clustering algorithm
JP6632941B2 (en) Equipment monitoring device and equipment monitoring method
CN117111568B (en) Equipment monitoring method, device, equipment and storage medium based on Internet of things
CN114139767A (en) Health trend prediction method and system for equipment, computer equipment and storage medium
KR102602273B1 (en) System and method for recognizing dynamic anomalies of multiple livestock equipment in a smart farm system
CN117608963A (en) Control method of intelligent self-monitoring computational fluid dynamics simulation solving system
CN116899947A (en) Photovoltaic module cleaning method and system
KR101564888B1 (en) Decentralized fault compensation method and apparatus of large-scale nonlinear systems
CN112612587A (en) Spark platform dynamic resource allocation method for flow analysis
US20230034061A1 (en) Method for managing proper operation of base station and system applying the method
CN116976441A (en) Equipment failure prediction model training method, equipment failure prediction method and equipment failure prediction device
CN113808727B (en) Device monitoring method, device, computer device and readable storage medium
JP2020187616A (en) Plant monitoring model creation device, plant monitoring model creation method, and plant monitoring model creation program
JPWO2021064768A5 (en)
CN115442247B (en) Adopt artificial intelligence data processing fortune dimension case

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant