CN115567763A - Monitoring method, monitoring device and readable storage medium - Google Patents

Monitoring method, monitoring device and readable storage medium

Info

Publication number
CN115567763A
CN115567763A (application CN202110753651.8A)
Authority
CN
China
Prior art keywords
camera device
video
monitoring
camera
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110753651.8A
Other languages
Chinese (zh)
Inventor
郑志羿
张亮明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110753651.8A priority Critical patent/CN115567763A/en
Publication of CN115567763A publication Critical patent/CN115567763A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides a monitoring method, a monitoring device and a readable storage medium, which are applied to a mobile terminal, and include: acquiring a first monitoring video through a first camera device; after determining that the first monitoring video contains an object meeting a first condition, starting a second camera device and/or a third camera device, wherein the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device; and acquiring a second monitoring video through the second camera device, and/or acquiring a third monitoring video through the third camera device, wherein the second monitoring video is a close-up video of the object. In this way, a close-up video and a wide-angle video of the object can be obtained at the same time as the ordinary monitoring video, so that richer, multi-dimensional video information is obtained and the user experience is improved.

Description

Monitoring method, monitoring device and readable storage medium
Technical Field
The present disclosure relates to the field of computer image recognition technologies, and in particular, to a monitoring method and apparatus, and a readable storage medium.
Background
As the configuration of smart devices is gradually upgraded, the number of rear cameras on a smart device keeps increasing. How to use these rear cameras to enrich the functions of the smart device is a technical problem to be solved.
Disclosure of Invention
In view of the above, the present disclosure provides a monitoring method, a monitoring device and a readable storage medium.
According to a first aspect, an embodiment of the present disclosure provides a monitoring method applied to a mobile terminal, including:
acquiring a first monitoring video through a first camera device;
after determining that the first monitoring video contains the object meeting the first condition, starting a second camera device and/or a third camera device; the focal length of the second camera device is larger than that of the first camera device, and the wide angle of the third camera device is larger than that of the first camera device;
acquiring a second monitoring video through the second camera device, and/or acquiring a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
In an embodiment, after capturing a close-up video of the object by the second camera, the method comprises:
turning off the second camera device and/or the third camera device when it is determined that the object does not meet the first condition, or when the first monitoring video does not contain an object meeting the first condition, or when the third monitoring video does not contain an object meeting the first condition.
In an embodiment, the first condition comprises at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
In one embodiment, after determining that the first surveillance video includes an object meeting the first condition, the method includes:
and when the ambient illumination is lower than the set illumination, increasing the sensitivity of the second camera device and/or the third camera device or starting a light supplement lamp.
In one embodiment, the method comprises: identifying the object from the third monitoring video; determining an adjustment angle of the second camera device according to the position of the object in the third monitoring video and the current shooting direction of the second camera device; and changing the shooting direction of the second camera device according to the adjustment angle.
According to a second aspect, embodiments of the present disclosure provide a monitoring device, including:
the first acquisition module is used for acquiring a first monitoring video through a first camera device;
the first control module is used for starting the second camera device and/or the third camera device after determining that the first monitoring video contains the object meeting the first condition; the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device;
the second acquisition module is used for acquiring a second monitoring video through the second camera device, and/or acquiring a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
In an embodiment, the device further comprises:
a second control module configured to, after a close-up video of the object is captured by the second camera device, turn off the second camera device and/or the third camera device when it is determined that the object does not meet the first condition, or when the first monitoring video does not contain an object meeting the first condition, or when the third monitoring video does not contain an object meeting the first condition.
In an embodiment, the first condition comprises at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
In one embodiment, the first control module is further configured to:
increase the sensitivity of the second camera device and/or the third camera device, or turn on a light supplement lamp, when the ambient illumination is lower than a set illumination.
In one embodiment, the apparatus comprises:
an identification module to identify the object from the third surveillance video;
a determining module, configured to determine an adjustment angle of the second camera according to a position of the object in the third surveillance video and a current camera shooting direction of the second camera;
and the adjusting module is used for changing the shooting direction of the second shooting device according to the adjusting angle.
According to a third aspect, an embodiment of the present disclosure provides a monitoring device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the monitoring method.
According to a fourth aspect, embodiments of the present disclosure provide a non-transitory computer readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the steps of the monitoring method.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: after the first camera device collects the first monitoring video and it is determined that the first monitoring video contains an object meeting the first condition, the second camera device and/or the third camera device is started, the second monitoring video is collected through the second camera device, and/or the third monitoring video is obtained through the third camera device, so that a close-up video and a wide-angle video of the object can be obtained while the ordinary monitoring video is obtained, more multi-dimensional video information is obtained, and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a monitoring method according to an exemplary embodiment;
FIG. 2 is a block diagram illustrating a monitoring system according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a monitoring method according to an exemplary embodiment;
FIG. 4 is a block diagram illustrating a monitoring device according to an exemplary embodiment;
FIG. 5 is a block diagram illustrating a monitoring device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments in this disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the embodiments of the disclosure, as detailed in the claims that follow.
The terminology used in the embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present disclosure. As used in the disclosed embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
In some technical solutions, a camera device may be used to record the motion trajectory of an object, but only a surveillance video of the motion trajectory can be obtained, so the monitoring function is relatively limited.
The embodiment of the present disclosure provides a monitoring method applied to a mobile terminal. The mobile terminal may be a mobile phone, a computer, a tablet computer, or the like.
Referring to fig. 1, fig. 1 is a flow chart illustrating a monitoring method according to an exemplary embodiment.
As shown in fig. 1, the method includes:
step S101, a first monitoring video is collected through a first camera device;
step S102, after determining that the first monitoring video comprises the object meeting the first condition, starting a second camera device and/or a third camera device; the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device;
step S103, acquiring a second monitoring video through the second camera device, and/or acquiring a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
In one possible implementation, the first camera device is a low-power, always-on motion detection camera used for long-duration shooting; the second camera device is a large-angle pan-tilt telephoto camera used for shooting a close-up tracking video of the object; and the third camera device is an ultra-wide-angle camera used for shooting a panoramic video.
In one possible embodiment, the power consumption of the first camera device is less than that of the second camera device, and the power consumption of the first camera device is less than that of the third camera device.
In a possible implementation manner, the configuration of the mobile terminal monitoring system is shown in fig. 2, where a is the low-power, always-on motion detection camera, b is the large-angle pan-tilt telephoto camera, c is the ultra-wide-angle camera, and d is the light supplement lamp used for supplementing light in a low-illumination environment.
In the embodiment of the present disclosure, after the first surveillance video is acquired by the first camera device and it is determined that the first surveillance video contains an object meeting the first condition, the second camera device and/or the third camera device is started, the second surveillance video is acquired by the second camera device, and/or the third surveillance video is acquired by the third camera device. In this way, a close-up video and a wide-angle video of the object can be obtained while the ordinary surveillance video is obtained, so that more multi-dimensional video information is obtained and the user experience is improved.
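By way of illustration only, the basic flow of steps S101 to S103 can be sketched in a few lines of Python using OpenCV. The camera indexes (0 for the always-on detection camera, 1 for the telephoto camera, 2 for the ultra-wide-angle camera) and the meets_first_condition detector are assumptions for the sketch, not part of the disclosure.

```python
# Minimal sketch of steps S101-S103 (camera indexes and detector are assumed).
import cv2

def monitor(meets_first_condition):
    cam1 = cv2.VideoCapture(0)            # first camera device: low-power, always on
    cam2 = cam3 = None                     # telephoto / ultra-wide: started on demand
    try:
        while True:
            ok, frame = cam1.read()        # step S101: collect the first surveillance video
            if not ok:
                break
            if meets_first_condition(frame):
                # step S102: start the second and/or third camera device
                if cam2 is None:
                    cam2 = cv2.VideoCapture(1)
                if cam3 is None:
                    cam3 = cv2.VideoCapture(2)
                # step S103: collect the close-up and panoramic surveillance videos
                _, closeup = cam2.read()
                _, panorama = cam3.read()
    finally:
        for cam in (cam1, cam2, cam3):
            if cam is not None:
                cam.release()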
The embodiment of the disclosure provides a monitoring method applied to a mobile terminal. This method includes the method shown in fig. 1, and:
after capturing a close-up video of the object by the second camera device, the method includes:
and when the object is determined not to meet the first condition, or when the object meeting the first condition is not included in the first monitoring video, turning off the second camera device and/or the third camera device.
In the embodiment of the present disclosure, when it is determined that the object does not meet the first condition, or when the object meeting the first condition is not included in the first surveillance video, the second image pickup device and/or the third image pickup device is turned off. Therefore, when the object is determined not to accord with the first condition or the first monitoring video does not contain the object which accords with the first condition, the second camera device and/or the third camera device can be turned off in time, power is saved, and the first camera device continuously collects the first monitoring video.
The embodiment of the disclosure provides a monitoring method applied to a mobile terminal. This method includes the method shown in fig. 1, and:
the first condition includes at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
In an embodiment, the first condition may be that the shape of the object corresponds to the shape of a set target. For example, if the set target is a cat, when the shape of a cat is detected in the first surveillance video, a close-up video of the cat can be captured by the second camera device.
The first condition may be that the moving speed of the object is greater than a set speed; when the first surveillance video contains an object moving faster than the set speed, a close-up video of that object can be captured by the second camera device.
The first condition may also be that the object is a set target. For example, if the set target is a specific person, when that person is detected in the first surveillance video, a close-up video of the person can be captured by the second camera device.
In the embodiment of the present disclosure, the trigger condition for starting the second camera device and/or the third camera device can therefore be set as required.
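As a purely illustrative sketch of how these alternative trigger conditions might be checked, the following Python function combines the three checks; the shape and identity attributes of the detected object and the thresholds are hypothetical placeholders for whatever detector the terminal actually uses.

```python
# Hypothetical first-condition check combining the three alternatives.
def meets_first_condition(obj, prev_center, curr_center, dt,
                          target_shape=None, speed_threshold=None, target_id=None):
    # condition 1: the shape of the object corresponds to the set target's shape
    if target_shape is not None and obj.get("shape") == target_shape:
        return True
    # condition 2: the moving speed of the object is greater than the set speed
    if speed_threshold is not None and dt > 0:
        dx = curr_center[0] - prev_center[0]
        dy = curr_center[1] - prev_center[1]
        speed = (dx * dx + dy * dy) ** 0.5 / dt   # pixels per second between frames
        if speed > speed_threshold:
            return True
    # condition 3: the object is the set target (e.g. a specific person)
    if target_id is not None and obj.get("identity") == target_id:
        return True
    return False
```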
The embodiment of the disclosure provides a monitoring method applied to a mobile terminal. This method includes the method shown in fig. 1, and:
after determining that the first surveillance video contains an object meeting a first condition, the method comprises:
and when the ambient illumination is lower than the set illumination, the sensitivity of the second camera device and/or the third camera device is increased, or a light supplement lamp is started.
In the embodiment of the present disclosure, when the ambient illuminance is lower than the set illuminance, the sensitivity of the second camera device and/or the third camera device is increased, or the light supplement lamp is turned on. When the ambient illuminance is too low, a camera device cannot capture a sufficiently clear video; therefore, by setting an illuminance threshold and either adjusting the sensitivity of the camera device or supplementing light, the camera device can still capture a relatively clear picture under low illuminance.
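A minimal sketch of this low-light handling is given below. It assumes the ambient illuminance is approximated by the mean grey level of a frame from the first camera; the illuminance threshold, the ISO camera property, and the fill-light placeholder are assumptions and will differ per device and capture backend.

```python
# Assumed low-light handling: raise sensitivity of the started cameras or turn on the fill light.
import cv2

SET_ILLUMINANCE = 60  # assumed threshold on the mean grey level (0-255)

def turn_on_fill_light():
    """Placeholder for the device-specific fill-light (torch) control."""
    pass

def handle_low_light(frame, cam2, cam3):
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if grey.mean() < SET_ILLUMINANCE:
        # either increase the sensitivity of the second/third camera device ...
        for cam in (cam2, cam3):
            if cam is not None:
                cam.set(cv2.CAP_PROP_ISO_SPEED, 800)  # backend support varies
        # ... or turn on the light supplement lamp
        turn_on_fill_light()
```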
In some technical solutions, a camera device may be used to record the motion trajectory of an object, but the ability to analyze the object is limited, and when the object is disguised it is difficult to clearly identify its identity or species.
The embodiment of the disclosure provides a monitoring method applied to a mobile terminal. This method includes the method shown in fig. 1, and:
the object is identified from the third surveillance video, an adjustment angle of the second camera device is determined according to the position of the object in the third surveillance video and the current shooting direction of the second camera device, and the shooting direction of the second camera device is changed according to the adjustment angle.
In an embodiment, the adjustment angle is an angle in three-dimensional space.
In the embodiment of the present disclosure, the second camera device and the third camera device are used together to solve the problem of poor resolution of local features: the picture of the third camera device is used to identify the object, the angle by which the second camera device needs to be adjusted is calculated, and the second camera device can then accurately capture the local features of the object.
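For illustration only, the following sketch converts the object's pixel position in the ultra-wide-angle (third) camera's frame into pan/tilt adjustment angles for the telephoto (second) camera, assuming a simple pinhole model and roughly co-located cameras; the field-of-view values and the pan-tilt interface are assumptions, not the disclosed implementation.

```python
# Assumed pinhole-model conversion from pixel position to pan/tilt adjustment angles.
import math

def adjustment_angle(obj_x, obj_y, frame_w, frame_h,
                     hfov_deg, vfov_deg, current_pan, current_tilt):
    # focal lengths in pixels implied by the wide-angle camera's field of view
    fx = (frame_w / 2) / math.tan(math.radians(hfov_deg / 2))
    fy = (frame_h / 2) / math.tan(math.radians(vfov_deg / 2))
    # direction of the object relative to the optical axis, in degrees
    pan_target = math.degrees(math.atan2(obj_x - frame_w / 2, fx))
    tilt_target = math.degrees(math.atan2(obj_y - frame_h / 2, fy))
    # adjustment angle = target direction minus the current shooting direction
    return pan_target - current_pan, tilt_target - current_tilt
```

For example, with an assumed 120-degree horizontal field of view and a 1920-pixel-wide frame, an object 480 pixels to the right of the image centre corresponds to a pan adjustment of roughly 41 degrees relative to the optical axis.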
The details are described below with reference to specific examples.
Detailed description of the preferred embodiment
The method for monitoring through the mobile phone comprises the following steps:
step S200, collecting a first monitoring video with the low-power, always-on motion detection camera;
step S201, judging whether the first monitoring video contains a person whose moving speed is greater than a set speed; if so, executing step S202, otherwise repeating step S201;
step S202, turning on the ultra-wide-angle camera and the large-angle pan-tilt telephoto camera;
step S203, judging whether the ambient illumination is lower than a set illumination; if so, executing step S204; if not, executing step S205;
step S204, increasing the sensitivity of the corresponding camera device or turning on the light supplement lamp;
step S205, shooting a panoramic video with the ultra-wide-angle camera, and shooting a close-up video of the person with the large-angle pan-tilt telephoto camera;
step S206, judging whether the panoramic video contains the person; if not, executing step S207; if so, continuing to execute step S205 and step S206;
step S207, turning off the ultra-wide-angle camera and the large-angle pan-tilt telephoto camera, and returning to step S201.
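The specific embodiment above can be tied together as in the following sketch. The helper functions fast_person_in, person_in and handle_low_light are hypothetical stand-ins for the detection and illumination logic described earlier, and the camera indexes are assumptions for the sketch.

```python
# Hypothetical end-to-end loop for steps S200-S207.
import cv2

def phone_monitoring(fast_person_in, person_in, handle_low_light):
    detect_cam = cv2.VideoCapture(0)                 # low-power always-on camera
    while True:
        _, frame = detect_cam.read()                 # step S200: first monitoring video
        if not fast_person_in(frame):                # step S201: fast-moving person?
            continue
        wide_cam = cv2.VideoCapture(2)               # step S202: ultra-wide-angle camera
        tele_cam = cv2.VideoCapture(1)               # step S202: pan-tilt telephoto camera
        handle_low_light(frame, tele_cam, wide_cam)  # steps S203-S204: low-light handling
        while True:
            _, panorama = wide_cam.read()            # step S205: panoramic video
            _, closeup = tele_cam.read()             # step S205: close-up video of the person
            if not person_in(panorama):              # step S206: person still in the panorama?
                break
        wide_cam.release()                           # step S207: close both cameras
        tele_cam.release()                           # then return to step S201
```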
the embodiment of the disclosure provides a monitoring device. Referring to fig. 4, fig. 4 is a block diagram illustrating a monitoring device according to an exemplary embodiment. As shown in fig. 4, the apparatus includes:
a first collecting module 11, configured to collect a first surveillance video through a first camera;
the first control module 12 is configured to start the second camera and/or the third camera after determining that the first surveillance video includes an object meeting a first condition; the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device;
the second acquisition module 13 is configured to acquire a second monitoring video through the second camera device, and/or acquire a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
In an embodiment of the present disclosure, there is provided a monitoring device, the device including the device shown in fig. 4, and as shown in fig. 4:
the device comprises:
and the second control module is used for turning off the second camera device and/or the third camera device when the object is determined not to meet the first condition, or when the object meeting the first condition is not included in the first monitoring video, or when the object meeting the first condition is not included in the third monitoring video.
In an embodiment of the present disclosure, there is provided a monitoring device, the device including the device shown in fig. 4, and as shown in fig. 4:
the first condition includes at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
In an embodiment of the present disclosure, there is provided a monitoring device, the device including the device shown in fig. 4, and as shown in fig. 4:
the first control module 12 is configured to increase the sensitivity of the second camera and/or the third camera or start a light supplement lamp when the ambient illuminance is lower than the set illuminance.
In an embodiment of the present disclosure, there is provided a monitoring device, the device including the device shown in fig. 4, and as shown in fig. 4:
the device comprises:
an identification module to identify the object from the third surveillance video;
a determining module, configured to determine an adjustment angle of the second camera according to a position of the object in the third surveillance video and a current camera shooting direction of the second camera;
and the adjusting module is used for changing the shooting direction of the second shooting device according to the adjusting angle.
The embodiment of the present disclosure provides a monitoring device, which is applied to a terminal, and includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the monitoring method.
The disclosed embodiments provide a non-transitory computer readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the steps of the monitoring method.
Fig. 5 is a block diagram illustrating a monitoring device 500 according to an exemplary embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when apparatus 500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect an open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the apparatus 500; the sensor assembly 514 may also detect a change in the position of the apparatus 500 or of a component of the apparatus 500, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and a change in the temperature of the apparatus 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the apparatus 500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the embodiments of the disclosure following, in general, the principles of the embodiments of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the embodiments pertain. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims.
It is to be understood that the embodiments of the present disclosure are not limited to the precise arrangements described above and shown in the drawings, and that various combinations, substitutions, modifications, and changes of the method steps or terminal assemblies disclosed in the present disclosure may be made without departing from the scope thereof, and are intended to be included within the scope of the present disclosure. The scope of the disclosure as claimed is limited by the claims appended hereto.
It should be noted that, in the present disclosure, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.

Claims (12)

1. A monitoring method is applied to a mobile terminal, and is characterized by comprising the following steps:
acquiring a first monitoring video through a first camera device;
after determining that the first monitoring video contains an object meeting a first condition, starting a second camera device and/or a third camera device; the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device;
acquiring a second monitoring video through the second camera device, and/or acquiring a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
2. The method of claim 1,
after capturing a close-up video of the object by the second camera, the method comprises:
and when the object is determined not to meet the first condition, or when the object meeting the first condition is not contained in the first monitoring video, or when the object meeting the first condition is not contained in the third monitoring video, turning off the second camera device and/or the third camera device.
3. The method of claim 1 or 2,
the first condition includes at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
4. The method of claim 1,
after determining that the first surveillance video contains an object meeting a first condition, the method comprises:
and when the ambient illumination is lower than the set illumination, the sensitivity of the second camera device and/or the third camera device is increased, or a light supplement lamp is started.
5. The method of claim 1,
the method comprises the following steps:
the object is identified from the third surveillance video, an adjustment angle of the second imaging device is determined according to the position of the object in the third surveillance video and the current imaging direction of the second imaging device, and the imaging direction of the second imaging device is changed according to the adjustment angle.
6. A monitoring device, comprising:
the first acquisition module is used for acquiring a first monitoring video through a first camera device;
the first control module is used for starting the second camera device and/or the third camera device after determining that the first monitoring video contains the object meeting the first condition; the focal length of the second camera device is greater than that of the first camera device, and the wide angle of the third camera device is greater than that of the first camera device;
the second acquisition module is used for acquiring a second monitoring video through the second camera device and/or acquiring a third monitoring video through the third camera device; wherein the second monitoring video is a close-up video of the object.
7. The apparatus of claim 6,
the device comprises:
and the second control module is used for turning off the second camera device and/or the third camera device when the object is determined not to meet the first condition, or when the object meeting the first condition is not included in the first monitoring video, or when the object meeting the first condition is not included in the third monitoring video.
8. The apparatus of claim 6 or 7,
the first condition includes at least one of:
the shape of the object corresponds to the shape of a set target;
the moving speed of the object is greater than a set speed;
the object is a set target.
9. The apparatus of claim 6,
the first control module is used for increasing the light sensitivity of the second camera device and/or the third camera device or starting the light supplementing lamp when the ambient illumination is lower than the set illumination.
10. The apparatus of claim 6,
the device comprises:
an identification module to identify the object from the third surveillance video;
a determining module, configured to determine an adjustment angle of the second camera according to a position of the object in the third surveillance video and a current camera shooting direction of the second camera;
and the adjusting module is used for changing the shooting direction of the second shooting device according to the adjusting angle.
11. A monitoring device, applied to a mobile terminal, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the monitoring method of any one of claims 1 to 5.
12. A non-transitory computer readable storage medium having stored thereon executable instructions, wherein the executable instructions when executed by a processor implement the steps of the monitoring method of any one of claims 1 to 5.
CN202110753651.8A 2021-07-02 2021-07-02 Monitoring method, monitoring device and readable storage medium Pending CN115567763A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110753651.8A CN115567763A (en) 2021-07-02 2021-07-02 Monitoring method, monitoring device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110753651.8A CN115567763A (en) 2021-07-02 2021-07-02 Monitoring method, monitoring device and readable storage medium

Publications (1)

Publication Number Publication Date
CN115567763A (en) 2023-01-03

Family

ID=84736959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110753651.8A Pending CN115567763A (en) 2021-07-02 2021-07-02 Monitoring method, monitoring device and readable storage medium

Country Status (1)

Country Link
CN (1) CN115567763A (en)

Similar Documents

Publication Publication Date Title
CN106572299B (en) Camera opening method and device
KR101712301B1 (en) Method and device for shooting a picture
EP3136710A1 (en) Method and apparatus for controlling photography of unmanned aerial vehicle
EP3163411A1 (en) Method, device and apparatus for application switching
CN108668080B (en) Method and device for prompting degree of dirt of lens and electronic equipment
CN107463052B (en) Shooting exposure method and device
CN107480785B (en) Convolutional neural network training method and device
CN111984347A (en) Interaction processing method, device, equipment and storage medium
CN108040213B (en) Method and apparatus for photographing image and computer-readable storage medium
CN109522058B (en) Wake-up method, device, terminal and storage medium
CN108629814B (en) Camera adjusting method and device
CN107809588B (en) Monitoring method and device
CN109214175B (en) Method, device and storage medium for training classifier based on sample characteristics
CN108154090B (en) Face recognition method and device
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN107247535B (en) Intelligent mirror adjusting method and device and computer readable storage medium
CN107948876B (en) Method, device and medium for controlling sound box equipment
CN107832377B (en) Image information display method, device and system, and storage medium
CN112017598A (en) Backlight brightness adjusting method and device
CN110968155A (en) Full-screen terminal, operation execution method and device based on full-screen terminal
CN108769513B (en) Camera photographing method and device
CN115567763A (en) Monitoring method, monitoring device and readable storage medium
CN113254300A (en) Temperature control method and device and storage medium
CN114187874A (en) Brightness adjusting method and device and storage medium
CN113315904A (en) Imaging method, imaging device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination