CN111355943A - Monitoring equipment, method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN111355943A
Authority
CN
China
Prior art keywords
image
depth
camera
field range
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811566026.7A
Other languages
Chinese (zh)
Inventor
刘若鹏
赵治亚
杨亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN KUANG-CHI SUPER MATERIAL TECHNOLOGY Co.,Ltd.
Original Assignee
Shenzhen Kuang Chi Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Kuang Chi Space Technology Co Ltd filed Critical Shenzhen Kuang Chi Space Technology Co Ltd
Priority to CN201811566026.7A priority Critical patent/CN111355943A/en
Priority to PCT/CN2019/112424 priority patent/WO2020125185A1/en
Publication of CN111355943A publication Critical patent/CN111355943A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
                    • H04N13/20: Image signal generators
                        • H04N13/204: Image signal generators using stereoscopic image cameras
                            • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
                        • H04N13/271: Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a monitoring device, a monitoring method and apparatus, a storage medium, and an electronic device. The monitoring device comprises: a first image capture device for capturing a first image within a first depth-of-field range; a second image capture device for capturing a second image within a second depth-of-field range; and a control component, electrically connected to both image capture devices, for acquiring the first image and the second image. The maximum value of the first depth-of-field range is greater than the maximum value of the second, and the minimum value of the first is greater than the minimum value of the second. The invention thereby solves the problem in the related art that images acquired with a long focal length in a monitoring scene have insufficient depth of field, and achieves the effect of increasing image depth of field in long-focal-length applications.

Description

Monitoring equipment, method and device, storage medium and electronic device
Technical Field
The invention relates to the field of optoelectronics, and in particular to a monitoring device, a monitoring method, a monitoring apparatus, a storage medium, and an electronic device.
Background
By optical principle, the depth of field achieved by the photoelectric sensor in a conventional image capture or monitoring device decreases as the focal length of the lens increases. In scenes that monitor distant people or vehicles, a telephoto lens must be used to identify them; but, by the principle above, the telephoto lens reduces the depth of field of the image, and the effective information in the captured picture shrinks accordingly.
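This trade-off can be made concrete with the standard thin-lens depth-of-field formulas. The sketch below is only illustrative and is not part of the disclosure; the aperture (f/2.0) and circle of confusion (0.005 mm) are assumed values chosen for the comparison.

```python
def depth_of_field(f_mm, n_stop, coc_mm, subject_m):
    """Near/far limits of acceptable sharpness (thin-lens approximation).

    f_mm: focal length, n_stop: aperture f-number, coc_mm: circle of
    confusion, subject_m: focus distance in metres.  Aperture and circle
    of confusion are illustrative assumptions, not patent parameters.
    """
    hyperfocal_m = (f_mm ** 2) / (n_stop * coc_mm) / 1000.0
    f_m = f_mm / 1000.0
    near = hyperfocal_m * subject_m / (hyperfocal_m + (subject_m - f_m))
    if subject_m >= hyperfocal_m:
        far = float("inf")  # everything beyond the near limit is sharp
    else:
        far = hyperfocal_m * subject_m / (hyperfocal_m - (subject_m - f_m))
    return near, far

# Same aperture and sensor, both focused at 50 m: the 70 mm telephoto lens
# yields a far shallower zone of sharpness than the 25 mm short lens.
tele = depth_of_field(70, 2.0, 0.005, 50)
wide = depth_of_field(25, 2.0, 0.005, 50)
```

With these assumed values the 70 mm lens is sharp over roughly 45 to 56 m, while the 25 mm lens covers roughly 28 to 250 m, which is exactly the shrinkage of effective information the background describes.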
For the above problem in the related art, namely insufficient depth of field of images obtained with a long focal length in monitoring scenes, no effective solution has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a monitoring device, a monitoring method and apparatus, a storage medium, and an electronic device, so as to at least solve the problem in the related art that images acquired with a long focal length in a monitoring scene have insufficient depth of field.
According to an embodiment of the present invention, there is provided a monitoring apparatus including:
a first image capture device for capturing a first image within a first depth-of-field range;
a second image capture device for capturing a second image within a second depth-of-field range;
a control component for acquiring the first image and the second image, the first and second image capture devices being electrically connected to the control component;
wherein the maximum value of the first depth of field range is greater than the maximum value of the second depth of field range, and the minimum value of the first depth of field range is greater than the minimum value of the second depth of field range.
According to another embodiment of the present invention, there is provided a monitoring method using the monitoring device in the above embodiment, the method including:
acquiring the first image shot by the first camera shooting device and the second image shot by the second camera shooting device; wherein the first image is located within the first depth range, the second image is located within the second depth range, a maximum value of the first depth range is greater than a maximum value of the second depth range, and a minimum value of the first depth range is greater than a minimum value of the second depth range;
and splicing the first image and the second image.
According to another embodiment of the present invention, there is provided a monitoring apparatus to which the monitoring device in the above embodiments is applied, the apparatus including:
an acquisition module configured to acquire the first image captured by the first image capturing apparatus and the second image captured by the second image capturing apparatus; wherein the first image is located within the first depth range, the second image is located within the second depth range, a maximum value of the first depth range is greater than a maximum value of the second depth range, and a minimum value of the first depth range is greater than a minimum value of the second depth range;
and the splicing module is used for splicing the first image and the second image.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, images in different depth-of-field ranges can be acquired separately by the first and second image capture devices, so the problem in the related art that images acquired with a long focal length in a monitoring scene have insufficient depth of field can be solved, achieving the effect of increasing image depth of field in long-focal-length applications.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a functional schematic diagram of a monitoring device provided in accordance with an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario of a monitoring device according to an embodiment of the present invention;
FIG. 3 is a control schematic of a monitoring device provided in accordance with an embodiment of the present invention;
FIG. 4 is a circuit diagram of an embedded hardware platform connected to an external device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of embedded hardware platform connection power control according to an embodiment of the present invention;
FIG. 6 is a circuit schematic of a processing unit provided in accordance with a specific embodiment of the present invention;
FIG. 7 is a schematic circuit diagram of a first ISP input port in an input unit provided in accordance with a specific embodiment of the present invention;
FIG. 8 is a schematic circuit diagram of a second ISP input port in an input unit provided in accordance with a specific embodiment of the present invention;
FIG. 9 is a circuit schematic of a memory cell connection provided in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram of a first DDR circuit in a memory cell according to an embodiment of the invention;
FIG. 11 is a schematic diagram of a second DDR circuit in a memory cell according to an embodiment of the invention;
fig. 12 is a circuit schematic of a communication unit provided in accordance with a specific embodiment of the present invention;
fig. 13 is a circuit schematic diagram of an ethernet unit provided in accordance with a specific embodiment of the present invention;
FIG. 14 is a circuit schematic of a power supply unit provided in accordance with a specific embodiment of the present invention;
FIG. 15 is a flow chart of a monitoring method provided in accordance with an embodiment of the present invention;
fig. 16 is a block diagram of a monitoring apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
In this embodiment, a monitoring device is provided. Fig. 1 is a functional schematic diagram of the monitoring device according to an embodiment of the present invention, and fig. 2 is a schematic diagram of an application scenario of the monitoring device according to an embodiment of the present invention; as shown in figs. 1 and 2, the monitoring device includes:
a first image capture device 102 configured to capture a first image within a first depth-of-field range;
a second image capture device 104 configured to capture a second image within a second depth-of-field range;
a control component 106 configured to acquire the first image and the second image, the first and second image capture devices being electrically connected to the control component;
wherein the maximum value of the first depth of field range is greater than the maximum value of the second depth of field range, and the minimum value of the first depth of field range is greater than the minimum value of the second depth of field range.
With the device of this embodiment, images in different depth-of-field ranges are acquired separately by the first and second image capture devices, so the device solves the related-art problem of insufficient image depth of field in long-focal-length monitoring scenes and achieves the effect of increasing image depth of field in long-focal-length applications.
It should further be noted that the first and second depth-of-field ranges are both distance ranges: the maximum and minimum of the first range delimit a farther band of distances, while the maximum and minimum of the second range delimit a nearer band. For example, taking the position of the two image capture devices as the origin, the first depth-of-field range may be 70 to 150 meters and the second 15 to 70 meters; together they form a long depth of field for monitoring.
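The claimed relationship between the two ranges, and the way they merge into one long monitored interval, can be sketched as a small interval check. The function name and error handling below are illustrative only, not part of the disclosure.

```python
def combined_coverage(first, second):
    """Merge two depth-of-field intervals (metres) into overall coverage.

    Enforces the claim constraint (max1 > max2 and min1 > min2) and
    requires the ranges to touch or overlap, so no blind zone is left
    between the near-field and far-field devices.
    """
    (min1, max1), (min2, max2) = first, second
    if not (max1 > max2 and min1 > min2):
        raise ValueError("first range must lie farther out than the second")
    if min1 > max2:
        raise ValueError(f"uncovered gap between {max2} m and {min1} m")
    return (min2, max1)

# The example from the text: 70-150 m plus 15-70 m gives 15-150 m coverage.
overall = combined_coverage((70, 150), (15, 70))
```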
Meanwhile, in the related art, the depth of field of a single image capture device is often improved by adding lens elements, and every extra reflection and refraction reduces the luminous flux. The device of this embodiment does not alter the optical path during shooting: light travels its original route through both the first and second image capture devices, so neither the first image nor the second image suffers any additional loss of luminous flux. The signal-to-noise ratio of the captured images is therefore preserved, and in scenes with complex lighting, such as glimmer or low light, image clarity improves markedly.
In outdoor monitoring scenes, the lighting around the equipment is often complex, affected by time of day, weather, and even surrounding buildings. The device of this embodiment, while effectively extending the depth of field of the picture when monitoring people or vehicles, also effectively preserves picture clarity, further improving the monitoring result.
In addition, the condition that the maximum of the first depth-of-field range exceeds the maximum of the second, and the minimum of the first exceeds the minimum of the second, can be realized by adjusting the relative position of the two image capture devices: for example, by adjusting their relative angle at a shared installation position, or by installing them at different positions. The disclosure is not limited in this respect.
In an alternative embodiment, the shooting orientation of the first camera device 102 and the shooting orientation of the second camera device 104 form a preset included angle, and the preset included angle is an acute angle.
It should further be noted that the preset included angle lets the two image capture devices point in different directions from the same installation position; to make the first depth-of-field range lie farther than the second, the first image capture device may be aimed at the far field and the second at the near field.
In an alternative embodiment, the minimum value of the first depth of field range is equal to the maximum value of the second depth of field range.
It should further be noted that the minimum of the first depth-of-field range being equal to the maximum of the second indicates that the two ranges adjoin each other, with neither a gap nor an overlap between them. Under this scheme the first and second images neither overlap nor contain blurred portions, so the image obtained by subsequently stitching them stays sharp and the processing burden on the control component is reduced.
To make the minimum of the first depth-of-field range equal the maximum of the second, i.e. to make the two ranges adjoin, the included angle between the first and second image capture devices can be adjusted while their focal lengths are held fixed; that is, adjusting the orientations of the two devices achieves this.
In an alternative embodiment, the first image capture device 102 is located above the second image capture device 104, and the upper edge of the first device's field of view lies in the horizontal direction.
It should further be noted that "the first image capture device is located above the second" means that the first device, which shoots farther, sits at a greater mounting height than the second, not necessarily vertically above it.
In an optional embodiment, when the two image capture devices are installed at different positions to satisfy the range conditions of the above embodiment, the first device may be installed at a position suited to shooting the first depth-of-field range and the second at a position suited to shooting the second.
In an alternative embodiment, the first image capturing apparatus 102 and the second image capturing apparatus 104 are electrically connected to the control component 106, and include:
the first image pickup apparatus 102 and the second image pickup apparatus 104 are each independently electrically connected to the control unit 106.
It should further be explained that the two image capture devices being independently electrically connected to the control component means that each sends its image (the first image and the second image, respectively) to the control component for processing after acquisition.
In an alternative embodiment, the control component 106 is further configured to:
and splicing the acquired first image and the second image.
In an alternative embodiment, the control component 106 is further configured to:
the first image capturing apparatus 102 is instructed to capture a first image at a first time, and the second image capturing apparatus 104 is instructed to capture a second image at a second time, wherein the first time and the second time are at the same time.
Under this scheme the two image capture devices expose at the same moment, so the first and second images stay consistent in parameters such as brightness and contrast.
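One common way to release two capture paths at the same instant in software is a rendezvous barrier. The sketch below is only an illustration of that pattern; the capture callbacks are hypothetical stand-ins, since the disclosure does not specify how the control component triggers the sensors.

```python
import threading
import time

def make_camera(name, results):
    """Hypothetical stand-in for a real camera trigger; records fire time."""
    def capture(barrier):
        barrier.wait()  # both threads are released at the same instant
        results[name] = time.monotonic()
    return capture

results = {}
barrier = threading.Barrier(2)  # rendezvous point for the two devices
threads = [
    threading.Thread(target=make_camera("first", results), args=(barrier,)),
    threading.Thread(target=make_camera("second", results), args=(barrier,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Residual trigger skew between the two "exposures", in seconds.
skew = abs(results["first"] - results["second"])
```

In a real device the synchronization would more likely be a hardware trigger line shared by both sensors; the barrier merely illustrates the "same moment" requirement.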
In an alternative embodiment, the first image pickup apparatus 102 includes: a first imaging lens and a first photosensor, and the second imaging apparatus 104 includes: a second camera lens and a second photoelectric sensor;
the first camera lens is a long-focus lens, and the second camera lens is a short-focus lens; the first photoelectric sensor and the second photoelectric sensor are electrically connected with the control component.
It should further be noted that, in this embodiment, the shooting orientations of the two image capture devices, and the included angle between them, all refer to the orientation or angle of the first and second lenses; the first and second photosensors may be integrated behind their respective lenses. The pairing of a long-focus lens with a short-focus lens under this scheme lets the first and second image capture devices acquire images at different depths of field.
In an alternative embodiment, the control assembly comprises:
the device comprises a processing unit and an input unit, wherein the processing unit is electrically connected with the input unit;
the input unit comprises a first image signal processing ISP input port and a second image signal processing ISP input port, the first photoelectric sensor is electrically connected to the first ISP input port, and the second photoelectric sensor is electrically connected to the second ISP input port.
The first photoelectric sensor is arranged to transmit a first image to a first ISP input port, and the first ISP input port is used for performing ISP processing on the first image to obtain first color-coded YUV data;
the second photoelectric sensor is arranged to transmit a second image to a second ISP input port, and the second ISP input port is used to perform ISP processing on the second image to obtain second color-coded YUV data.
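For illustration of what "color-coded YUV data" means downstream of the ISP, the conversion below uses the standard full-range BT.601 coefficients. This is an assumption: the disclosure does not specify which YUV variant the ISP produces, and a real ISP also performs demosaicing, white balance, and other raw-domain steps first.

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV conversion (8-bit, chroma offset 128).

    Illustrative of the luma/chroma encoding an ISP front end emits; the
    choice of BT.601 full-range is an assumption, not from the patent.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b + 128
    v = 0.615 * r - 0.51499 * g - 0.10001 * b + 128
    return round(y), round(u), round(v)
```

For example, pure white maps to maximum luma with neutral chroma, and pure black to zero luma with neutral chroma.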
It should further be explained that the processing unit is an electronic component with a data-processing function, including but not limited to a CPU (central processing unit), a DSP (digital signal processor), or an MCU (microcontroller unit), and is used to control and manage the control component and the first and second image capture devices; the ISP input ports in the input unit process the first and second images. The control component may further include a memory unit, a flash memory unit, an ethernet unit, a power supply unit, and any other units or modules necessary for the device to operate.
To further illustrate the monitoring device in this embodiment, the following is described in detail by way of specific embodiments:
Figs. 2 and 3 show the image capture portion of this embodiment; fig. 3 is a control schematic of the monitoring device according to this embodiment and shows the specific configuration of the first image capture device 102 and the second image capture device 104. The first image capture device 102 includes a first lens 1021 and a first photosensor 1022 (Sensor1 in fig. 3); the first lens 1021 is a 1/1.8-inch, 70 mm fixed-focus lens with a depth of field of 70 to 180 meters, and the first photosensor 1022 is a 6-megapixel CMOS sensor with a 2.4 μm pixel pitch. As shown in figs. 2 and 3, the second image capture device 104 includes a second lens 1041 and a second photosensor 1042 (Sensor2 in fig. 3); the second lens 1041 is a 1/1.8-inch, 25 mm fixed-focus lens with a depth of field of 15 to 70 meters, and the second photosensor 1042 is likewise a 6-megapixel CMOS sensor with 2.4 μm pixels. As shown in fig. 2, the two image capture devices are stacked, with an included angle of 3.9 degrees between them; during installation it must be ensured that the upper (first) image capture device is kept horizontal. In this way the first device acquires images within a 70-to-180-meter depth of field and the second within a 15-to-70-meter depth of field, together yielding images over a long depth-of-field range of 15 to 180 meters.
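As a rough consistency sketch for the stacked geometry, the vertical half field of view of each lens can be computed from the sensor size and focal length. The 4.99 mm active height assumed below is approximate for a 1/1.8-inch, 2.4 μm sensor; the 3.9-degree mounting angle in the embodiment also depends on installation height and scene geometry, so this is not a derivation of that figure.

```python
import math

def vertical_half_fov_deg(sensor_height_mm, focal_mm):
    """Vertical half field of view (degrees) for a lens/sensor pair.

    sensor_height_mm is an assumed active-area height, not a patent value.
    """
    return math.degrees(math.atan((sensor_height_mm / 2.0) / focal_mm))

tele_half = vertical_half_fov_deg(4.99, 70.0)  # long-focus lens, far field
wide_half = vertical_half_fov_deg(4.99, 25.0)  # short-focus lens, near field
```

The long-focus lens sees only about a 2-degree half angle while the short-focus lens sees close to 6 degrees, which is why a few degrees of relative tilt suffice to point the two fields of view at adjoining distance bands.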
It should further be noted that the component sizes above are only one choice for this embodiment; other models or sizes may be selected as actual needs dictate. In this embodiment the first photosensor 1022 and the second photosensor 1042 are preferably IMX178 sensors. As shown in fig. 3, a HiSilicon Hi3519V101 embedded hardware platform serves as the hardware platform of the monitoring device and carries the control component 106 of the above embodiment, which specifically includes: a processing unit 1061, an input unit 1062, a memory unit 1063, a flash memory unit 1064, a communication unit 1065, an ethernet unit 1066, and a power supply unit 1067, each of the latter six being electrically connected to the processing unit 1061. Fig. 4 is a circuit diagram of the embedded hardware platform connected to an external device according to an embodiment of the present invention, and fig. 5 is a schematic diagram of the embedded hardware platform's power control according to an embodiment of the present invention; the platform's operation and its connection to external devices are shown in figs. 4 and 5.
The processing unit 1061 is an electronic component with a data-processing function, including but not limited to a CPU (central processing unit), a DSP (digital signal processor), or an MCU (microcontroller unit), and is used to stitch the first and second images and to control and manage the related hardware. Fig. 6 is a schematic circuit diagram of the processing unit according to an embodiment of the present invention; the processing unit 1061 is connected as shown in fig. 6.
The input unit 1062 includes a first ISP input port and a second ISP input port, and is configured to receive image information; the first ISP input port and the second ISP input port may be integrated directly into the processing unit 1061, i.e. the CPU. Fig. 7 is a schematic circuit diagram of a first ISP input port of an input unit according to an embodiment of the present invention, and a connection operation manner of the first ISP input port of the input unit 1062 and a corresponding first photosensor is shown in fig. 7. Fig. 8 is a schematic circuit diagram of a second ISP input port in the input unit according to the embodiment of the present invention, and the connection between the second ISP input port in the input unit 1062 and the corresponding second photosensor operates as shown in fig. 8.
The memory unit 1063 includes a first double-data-rate synchronous dynamic RAM (DDR) and a second DDR; fig. 9 is a schematic circuit diagram of the memory-unit connection according to an embodiment of the present invention, and the connection between either DDR and the hardware platform is as shown in fig. 9. Fig. 10 is a schematic circuit diagram of the first DDR and fig. 11 of the second DDR; they operate as shown in figs. 10 and 11 respectively. The Flash memory unit 1064 includes a serial-peripheral-interface SPI NAND Flash, through which it may connect to a corresponding external storage device. The communication unit 1065 includes a UART, alarm I/O, and a TF-card interface; fig. 12 is a schematic circuit diagram of the communication unit according to a specific embodiment of the present invention, and its working principle is shown in fig. 12. The ethernet unit 1066 includes an ethernet controller for network connection; fig. 13 is a schematic circuit diagram of the ethernet unit, and its working principle is shown in fig. 13. The power supply unit 1067 includes a chopper for DC conversion of the input voltage; since the hardware platform has several different supply requirements, the power supply unit may adopt different choppers to produce the required voltages. Fig. 14 is a schematic circuit diagram of the power supply unit according to a specific embodiment of the present invention, here implementing 12 V to 5 V DC conversion; when other conversions are required, a person skilled in the art will know how to configure the chopper circuit.
In the input unit, the first and second ISP input ports each support image input at up to 4K resolution. Image data entering through them is ISP-processed into YUV data and then sent to a stitching unit in the control component, which corrects and stitches the YUV data of the first and second images to obtain image data covering the complete depth-of-field range. The stitched data is passed to an image-encoding engine, and the resulting H.264 or H.265 stream is delivered over an IP transmission protocol to a client for display or storage.
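The stitching step can be pictured as stacking the corrected far-field frame above the corrected near-field frame. The toy sketch below stands in for that operation only; a real implementation would work on full YUV planes and apply geometric correction first, which the disclosure leaves unspecified.

```python
def stitch_vertically(far_rows, near_rows):
    """Concatenate two equal-width images (given as rows of pixel values):
    far-field frame on top, near-field frame below, as a stand-in for the
    corrective stitching the control component performs."""
    if len(far_rows[0]) != len(near_rows[0]):
        raise ValueError("frames must share a width before stitching")
    return far_rows + near_rows

# Toy 2x4 "frames" standing in for the two corrected YUV images.
far = [[1, 1, 1, 1], [2, 2, 2, 2]]
near = [[3, 3, 3, 3], [4, 4, 4, 4]]
panorama = stitch_vertically(far, near)
```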
It should be further noted that the circuit schematic diagram shown in any one of fig. 4 to fig. 14 is only used as a circuit connection manner for implementing the operation of the corresponding unit or module in the long focus monitoring device in the embodiment, and the invention is not limited to the specific circuit connection manner. On the basis of the circuit schematic diagram shown in any one of fig. 4 to fig. 14, a person skilled in the art can clearly know the connection manner of the corresponding units or modules in this embodiment, and therefore details of the circuit or the operation principle thereof are not repeated.
After the stitching processing, the first image and the second image can be stitched into a picture with a depth of field ranging from 15 meters to 180 meters, in which people or objects within this range are imaged clearly. The picture has an aspect ratio of 3:4 (width:height), a resolution of 12 megapixels (3000 × 4000), and a pixel pitch of 2.4 μm.
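As a quick consistency check of the frame specification just stated (assuming the 3000-pixel dimension is the width and the 4000-pixel dimension the height, an interpretation the source leaves ambiguous):

```python
# Consistency check of the stitched-frame specification: 3:4 aspect ratio,
# 12 megapixels (3000 x 4000), 2.4 um pixel pitch. Which dimension is width
# vs height is an assumption; the source only states "3000 x 4000".
width, height = 3000, 4000

assert width / height == 3 / 4        # aspect ratio 3:4 (width:height)
assert width * height == 12_000_000   # "1200 ten thousand" = 12 megapixels

pitch_um = 2.4                         # pixel pitch
sensor_w_mm = round(width * pitch_um / 1000, 1)
sensor_h_mm = round(height * pitch_um / 1000, 1)
print(sensor_w_mm, sensor_h_mm)        # implied active area: 7.2 x 9.6 mm
```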
Example 2
In this embodiment, a long-focus monitoring method is provided, which uses the long-focus monitoring device of Embodiment 1 above. Fig. 15 is a flowchart of long-focus monitoring according to an embodiment of the present invention; as shown in fig. 15, the flow includes the following steps:
step S202, acquiring a first image shot by a first camera and a second image shot by a second camera; the first image is located in a first depth-of-field range, the second image is located in a second depth-of-field range, the maximum value of the first depth-of-field range is larger than the maximum value of the second depth-of-field range, and the minimum value of the first depth-of-field range is larger than the minimum value of the second depth-of-field range;
and step S204, splicing the first image and the second image.
By the method in this embodiment, images in different depth-of-field ranges can be acquired by the first camera device and the second camera device respectively, thereby solving the problem in the related art that pictures captured at a long focal length in a monitoring scene have insufficient depth of field, and achieving the effect of increasing the depth of field of the picture in long-focal-length application scenarios.
Optionally, the entity executing the above steps may be a control component, such as a CPU, but is not limited thereto.
In an alternative embodiment, in the step S202, the minimum value of the first depth of field range is equal to the maximum value of the second depth of field range.
It should be further noted that the minimum value of the first depth-of-field range being equal to the maximum value of the second depth-of-field range specifically indicates that the first depth-of-field range and the second depth-of-field range adjoin each other, that is, there is neither a gap nor an overlapping area between them. With this technical solution, the first image and the second image neither overlap nor leave a blurred band between them, so the new image obtained by subsequently stitching the first image and the second image is sharp, and the processing complexity of the control component is reduced.
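The adjoining-ranges condition described above can be written as a small validation. The numeric range endpoints below are illustrative (loosely based on the 15-180 m example elsewhere in the document), not prescribed values:

```python
# Check that two depth-of-field ranges satisfy the constraints above:
# the far (first) range lies beyond the near (second) range, and its minimum
# equals the near range's maximum, so there is neither a gap nor an overlap.
# Range endpoints are illustrative values, not prescribed by the source.

def ranges_adjoin(first, second):
    first_min, first_max = first
    second_min, second_max = second
    return (first_max > second_max        # max of first > max of second
            and first_min > second_min    # min of first > min of second
            and first_min == second_max)  # ranges meet exactly, no gap/overlap

print(ranges_adjoin((60, 180), (15, 60)))  # True: ranges meet at 60 m
print(ranges_adjoin((70, 180), (15, 60)))  # False: a 60-70 m gap would blur
```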
In an alternative embodiment, the acquiring a first image captured by a first image capturing apparatus and a second image captured by a second image capturing apparatus in step S202 includes:
acquiring a first image shot by the first camera device at a first moment, and acquiring a second image shot by the second camera device at a second moment; wherein the first moment and the second moment are the same moment.
With this technical solution, the first camera device and the second camera device perform exposure shooting at the same moment, so that the first image and the second image remain uniform in parameters such as brightness and contrast. Specifically, the control component may send control instructions to the first camera device and the second camera device simultaneously, or the two camera devices may be made to operate simultaneously when a preset condition occurs (for example, at a certain time point).
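One way to realize the simultaneous exposure described above is a shared trigger that releases both cameras at once. The threading-based sketch below is an illustrative assumption about the control flow; the patent does not specify its control circuit at this level:

```python
# Sketch of simultaneous exposure via a shared trigger: both camera threads
# block on one event, and the control component releases them at the same
# moment. The camera task is a hypothetical stand-in for the real devices.
import threading

results = {}
trigger = threading.Event()

def camera_task(name):
    trigger.wait()              # block until the shared trigger fires
    results[name] = "frame"     # stand-in for the actual exposure

threads = [threading.Thread(target=camera_task, args=(name,))
           for name in ("first", "second")]
for t in threads:
    t.start()
trigger.set()                   # one instruction releases both cameras
for t in threads:
    t.join()
print(sorted(results))          # ['first', 'second']
```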
In an alternative embodiment, stitching the first image with the second image comprises:
ISP processing is carried out on the first image to obtain first color coding YUV data;
ISP processing is carried out on the second image to obtain second color coding YUV data;
and splicing the first color coded YUV data and the second color coded YUV data.
It should be further noted that, before the first color-coded YUV data and the second color-coded YUV data are stitched, they may also be corrected; for example, obvious distortion points in the image may be corrected, or dark regions may be processed accordingly.
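The correct-then-stitch order described above can be sketched as follows. The simple luma floor used as the "dark region" correction is a toy placeholder for whatever correction the ISP output actually requires:

```python
# Sketch: correct each camera's YUV luma before stitching. The "correction"
# here is a toy brightness lift for dark pixels; real correction (distortion
# points, dark regions) would be scene- and lens-dependent.

def correct_luma(rows, floor=16):
    # Lift any luma value below `floor` up to it (toy dark-region correction).
    return [[max(y, floor) for y in row] for row in rows]

def stitch(first_rows, second_rows):
    # First (far-field) image on top, second (near-field) image below.
    return first_rows + second_rows

first = correct_luma([[5, 200], [10, 220]])
second = correct_luma([[100, 120], [90, 110]])
stitched = stitch(first, second)
print(stitched[0])    # [16, 200] -- the dark pixel was lifted to the floor
print(len(stitched))  # 4 rows in the combined frame
```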
In an alternative embodiment, in step S202, the first image capturing apparatus is located above the second image capturing apparatus, and the upper field of view of the first image capturing apparatus is in a horizontal direction.
It should be further noted that this relative arrangement of the first camera device and the second camera device enables the first image and the second image they acquire to be stitched seamlessly during the stitching process.
In an alternative embodiment, in step S202, the first image capturing apparatus includes: first camera lens and first photoelectric sensor, second camera equipment includes: a second camera lens and a second photoelectric sensor;
the first camera lens is a long-focus lens, and the second camera lens is a short-focus lens; the first photoelectric sensor and the second photoelectric sensor are electrically connected with the control component.
It should be further noted that this configuration of the first camera device and the second camera device enables them to achieve simultaneous exposure when acquiring the first image and the second image, so that stitching traces between the first image and the second image are minimized and the imaging effect is further improved.
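How a long-focus and a short-focus lens yield complementary far and near depth-of-field ranges can be illustrated with the standard hyperfocal-distance formulas. Every numeric parameter below (focal lengths, f-number, focus distances, circle of confusion) is an illustrative assumption, not a value from the patent:

```python
# Depth-of-field estimate from the standard thin-lens/hyperfocal formulas.
# All parameter values are illustrative assumptions, not from the patent.

def depth_of_field(f_mm, f_number, focus_mm, coc_mm=0.0024):
    """Return (near, far) sharpness limits in metres for a lens focused at focus_mm."""
    hyperfocal = f_mm * f_mm / (f_number * coc_mm) + f_mm
    near = focus_mm * (hyperfocal - f_mm) / (hyperfocal + focus_mm - 2 * f_mm)
    if focus_mm >= hyperfocal:
        far = float("inf")
    else:
        far = focus_mm * (hyperfocal - f_mm) / (hyperfocal - focus_mm)
    return near / 1000, far / 1000

# Hypothetical long-focus lens focused far vs short-focus lens focused near:
print(depth_of_field(75, 4, 60_000))  # far-field camera's sharp range (m)
print(depth_of_field(25, 4, 20_000))  # near-field camera's sharp range (m)
```

With suitable focal lengths and focus distances, the short-focus camera's far limit can be made to meet the long-focus camera's near limit, producing one contiguous sharp range after stitching.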
In an alternative embodiment, the acquiring a first image captured by a first image capturing apparatus and a second image captured by a second image capturing apparatus in step S202 includes:
the first image and the second image are acquired through the first image signal processing (ISP) input port and the second ISP input port respectively; specifically, the first photoelectric sensor in the first camera device is electrically connected to the first ISP input port, and the second photoelectric sensor in the second camera device is electrically connected to the second ISP input port.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
In this embodiment, a long-focus monitoring apparatus is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations; details already described are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the embodiments below is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 16 is a block diagram of a long focus monitoring apparatus according to an embodiment of the present invention, as shown in fig. 16, the apparatus including:
an obtaining module 302, configured to obtain a first image captured by a first image capturing apparatus and a second image captured by a second image capturing apparatus; the first image is located in a first depth-of-field range, the second image is located in a second depth-of-field range, the maximum value of the first depth-of-field range is larger than the maximum value of the second depth-of-field range, and the minimum value of the first depth-of-field range is larger than the minimum value of the second depth-of-field range;
and a stitching module 304, configured to stitch the first image and the second image.
By the apparatus in this embodiment, images in different depth-of-field ranges can be acquired by the first camera device and the second camera device respectively, thereby solving the problem in the related art that pictures captured at a long focal length in a monitoring scene have insufficient depth of field, and achieving the effect of increasing the depth of field of the picture in long-focal-length application scenarios.
In an alternative embodiment, in the obtaining module 302, the minimum value of the first depth-of-field range is equal to the maximum value of the second depth-of-field range.
It should be further noted that the minimum value of the first depth-of-field range being equal to the maximum value of the second depth-of-field range specifically indicates that the first depth-of-field range and the second depth-of-field range adjoin each other, that is, there is neither a gap nor an overlapping area between them. With this technical solution, the first image and the second image neither overlap nor leave a blurred band between them, so the new image obtained by subsequently stitching the first image and the second image is sharp, and the complexity of processing the new image in the control component is reduced.
In an optional embodiment, the acquiring module 302 acquires a first image captured by a first image capturing device and a second image captured by a second image capturing device, and includes:
acquiring a first image shot by the first camera device at a first moment, and acquiring a second image shot by the second camera device at a second moment; wherein the first moment and the second moment are the same moment.
With this technical solution, the first camera device and the second camera device perform exposure shooting at the same moment, so that the first image and the second image remain uniform in parameters such as brightness and contrast. Specifically, the control component may send control instructions to both camera devices simultaneously, or the first camera device and the second camera device may be made to operate simultaneously when a preset condition occurs (such as a certain time point).
In an optional embodiment, the stitching module 304 stitches the first image and the second image, including:
ISP processing is carried out on the first image to obtain first color coding YUV data;
ISP processing is carried out on the second image to obtain second color coding YUV data;
and splicing the first color coded YUV data and the second color coded YUV data.
In an optional embodiment, in the above-mentioned obtaining module 302, the first camera device is located above the second camera device, and the upper field of view of the first camera device is in a horizontal direction.
It should be further noted that, the relative position setting of the first image capturing apparatus and the second image capturing apparatus can enable the first image and the second image acquired by the first image capturing apparatus and the second image capturing apparatus to be seamlessly spliced in the splicing process.
In an optional embodiment, in the obtaining module 302, the first image capturing apparatus includes: first camera lens and first photoelectric sensor, second camera equipment includes: a second camera lens and a second photoelectric sensor;
the first camera lens is a long-focus lens, and the second camera lens is a short-focus lens; the first photoelectric sensor and the second photoelectric sensor are electrically connected with the control component.
It should be further noted that this configuration of the first camera device and the second camera device enables them to achieve simultaneous exposure when acquiring the first image and the second image, so that stitching traces between the first image and the second image are minimized and the imaging effect is further improved.
In an optional embodiment, the acquiring module 302 acquires a first image captured by a first image capturing device and a second image captured by a second image capturing device, and includes:
the first image and the second image are acquired through the first image signal processing (ISP) input port and the second ISP input port respectively; specifically, the first photoelectric sensor in the first camera device is electrically connected to the first ISP input port, and the second photoelectric sensor in the second camera device is electrically connected to the second ISP input port.
It should be noted that the above modules may be implemented by software or hardware; for the latter, this may be achieved in, but is not limited to, the following ways: the modules are all located in the same processor, or the modules are located, in any combination, in different processors.
Example 4
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a first image shot by the first image pickup device and a second image shot by the second image pickup device; the first image is located in a first depth-of-field range, the second image is located in a second depth-of-field range, the maximum value of the first depth-of-field range is larger than the maximum value of the second depth-of-field range, and the minimum value of the first depth-of-field range is larger than the minimum value of the second depth-of-field range;
and S2, splicing the first image and the second image.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 5
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a first image shot by the first image pickup device and a second image shot by the second image pickup device; the first image is located in a first depth-of-field range, the second image is located in a second depth-of-field range, the maximum value of the first depth-of-field range is larger than the maximum value of the second depth-of-field range, and the minimum value of the first depth-of-field range is larger than the minimum value of the second depth-of-field range;
and S2, splicing the first image and the second image.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described here. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (16)

1. A monitoring device, comprising:
the first camera equipment is used for shooting a first image in a first depth-of-field range;
the second camera shooting equipment is used for shooting a second image in a second depth-of-field range;
the control assembly is used for acquiring the first image and the second image, and the first camera shooting device and the second camera shooting device are electrically connected with the control assembly;
wherein the maximum value of the first depth of field range is greater than the maximum value of the second depth of field range, and the minimum value of the first depth of field range is greater than the minimum value of the second depth of field range.
2. The apparatus according to claim 1, wherein the shooting orientation of the first camera and the shooting orientation of the second camera are at a preset included angle, and the preset included angle is an acute angle.
3. The apparatus of claim 1, wherein a minimum value of the first depth of field range is equal to a maximum value of the second depth of field range.
4. The apparatus according to claim 1, wherein the first image pickup apparatus is located above the second image pickup apparatus, and an upper field of view of the first image pickup apparatus is in a horizontal direction.
5. The apparatus of claim 1, wherein the first and second imaging apparatuses are each electrically connected to the control assembly, comprising:
the first camera device and the second camera device are respectively and independently electrically connected with the control component.
6. The apparatus of any of claims 1 to 5, wherein the control assembly is further configured to:
and splicing the acquired first image and the second image.
7. The apparatus of any of claims 1 to 5, wherein the control assembly is further configured to:
and instructing the first image pickup device to take the first image at a first moment, and instructing the second image pickup device to take the second image at a second moment, wherein the first moment and the second moment are at the same moment.
8. The apparatus according to any one of claims 1 to 5, wherein the first image pickup apparatus includes: a first imaging lens and a first photosensor, the second imaging apparatus including: a second camera lens and a second photoelectric sensor;
the first camera lens is a long-focus lens, and the second camera lens is a short-focus lens; the first photoelectric sensor and the second photoelectric sensor are electrically connected with the control component.
9. The apparatus of claim 8, wherein the control assembly comprises: the device comprises a processing unit and an input unit, wherein the processing unit is electrically connected with the input unit;
the input unit comprises a first image signal processing ISP input port and a second image signal processing ISP input port, the first photoelectric sensor is electrically connected to the first ISP input port, and the second photoelectric sensor is electrically connected to the second ISP input port;
the first photoelectric sensor is arranged to transmit the first image to the first ISP input port, and the first ISP input port is used for performing ISP processing on the first image to obtain first color-coded YUV data;
the second photoelectric sensor is arranged to transmit the second image to the second ISP input port, and the second ISP input port is used to perform ISP processing on the second image to obtain second color-coded YUV data.
10. A monitoring method, applied to a monitoring apparatus according to any one of claims 1 to 8, the method comprising:
acquiring the first image shot by the first camera shooting device and the second image shot by the second camera shooting device; wherein the first image is located within the first depth-of-field range, the second image is located within the second depth-of-field range, a maximum value of the first depth-of-field range is greater than a maximum value of the second depth-of-field range, and a minimum value of the first depth-of-field range is greater than a minimum value of the second depth-of-field range;
and splicing the first image and the second image.
11. The method of claim 10, wherein the minimum value of the first depth of field range is equal to the maximum value of the second depth of field range.
12. The method of claim 10, wherein the acquiring the first image captured by the first image capture device and the second image captured by the second image capture device comprises:
acquiring the first image shot by the first camera shooting device at a first moment, and acquiring the second image shot by the second camera shooting device at a second moment; wherein the first time and the second time are at the same time.
13. The method of claim 10, wherein said stitching the first image with the second image comprises:
performing ISP processing on the first image to obtain first color coded YUV data;
ISP processing is carried out on the second image to obtain second color coding YUV data;
and splicing the first color coded YUV data and the second color coded YUV data.
14. A monitoring apparatus, applied to a monitoring device according to any one of claims 1 to 8, the apparatus comprising:
an acquisition module configured to acquire the first image captured by the first image capturing apparatus and the second image captured by the second image capturing apparatus; wherein the first image is located within the first depth-of-field range, the second image is located within the second depth-of-field range, a maximum value of the first depth-of-field range is greater than a maximum value of the second depth-of-field range, and a minimum value of the first depth-of-field range is greater than a minimum value of the second depth-of-field range;
and the splicing module is used for splicing the first image and the second image.
15. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 10 to 13 when executed.
16. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 10 to 13.
CN201811566026.7A 2018-12-20 2018-12-20 Monitoring equipment, method and device, storage medium and electronic device Pending CN111355943A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811566026.7A CN111355943A (en) 2018-12-20 2018-12-20 Monitoring equipment, method and device, storage medium and electronic device
PCT/CN2019/112424 WO2020125185A1 (en) 2018-12-20 2019-10-22 Monitoring device, method and apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811566026.7A CN111355943A (en) 2018-12-20 2018-12-20 Monitoring equipment, method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN111355943A true CN111355943A (en) 2020-06-30

Family

ID=71102043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811566026.7A Pending CN111355943A (en) 2018-12-20 2018-12-20 Monitoring equipment, method and device, storage medium and electronic device

Country Status (2)

Country Link
CN (1) CN111355943A (en)
WO (1) WO2020125185A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104780315A (en) * 2015-04-08 2015-07-15 广东欧珀移动通信有限公司 Shooting method and system for camera shooting device
CN105519103A (en) * 2015-10-22 2016-04-20 深圳市锐明技术股份有限公司 Vehicle monitoring method and apparatus
CN105744239A (en) * 2016-05-11 2016-07-06 湖南源信光电科技有限公司 Multi-focal-length lens ultrahigh resolution linkage imaging device
WO2018153254A1 (en) * 2017-02-27 2018-08-30 杭州海康威视数字技术股份有限公司 Multi-view camera device and monitoring system
CN209402651U (en) * 2018-12-20 2019-09-17 深圳光启空间技术有限公司 A kind of monitoring device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006054503A (en) * 2004-08-09 2006-02-23 Olympus Corp Image generation method and apparatus
CN105933678B (en) * 2016-07-01 2019-01-15 湖南源信光电科技有限公司 More focal length lens linkage imaging device based on Multiobjective Intelligent tracking
CN106791337B (en) * 2017-02-22 2023-05-12 北京汉邦高科数字技术股份有限公司 Zoom camera with double-lens optical multiple expansion and working method thereof
CN206559476U (en) * 2017-02-22 2017-10-13 北京汉邦高科数字技术股份有限公司 The Zoom camera that a kind of twin-lens optical multiplier is expanded
CN109120883B (en) * 2017-06-22 2020-11-27 杭州海康威视数字技术股份有限公司 Far and near scene-based video monitoring method and device and computer-readable storage medium
CN108174180B (en) * 2018-01-02 2019-07-30 京东方科技集团股份有限公司 A kind of display device, display system and 3 D displaying method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112462723A (en) * 2020-12-07 2021-03-09 北京达美盛软件股份有限公司 System for real-time control and visualization of digital factory under augmented reality environment
CN112462723B (en) * 2020-12-07 2021-12-24 北京达美盛软件股份有限公司 System for real-time control and visualization of digital factory under augmented reality environment

Also Published As

Publication number Publication date
WO2020125185A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
EP3624439B1 (en) Imaging processing method for camera module in night scene, electronic device and storage medium
DE112015006383B4 (en) Distance image detection device and distance image detection method
CN110035228B (en) Camera anti-shake system, camera anti-shake method, electronic device, and computer-readable storage medium
US10410061B2 (en) Image capturing apparatus and method of operating the same
JP5843454B2 (en) Image processing apparatus, image processing method, and program
CN104052931A (en) Image shooting device, method and terminal
KR20160072687A (en) Camera Module
CN109618102B (en) Focusing processing method and device, electronic equipment and storage medium
CN110049236A (en) Camera Anti-shaking circuit, mobile terminal, assemble method
US10110303B2 (en) Light-communication sending methods and apparatus, light-communication receiving methods and apparatus, and light communication systems
KR102698647B1 (en) Apparatus and method for generating a moving image data including multiple sections image of the electronic device
CN106506939B (en) Image acquisition device and acquisition method
US10462346B2 (en) Control apparatus, control method, and recording medium
CN209402651U (en) A kind of monitoring device
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device
CN108513069A (en) Image processing method, device, storage medium and electronic equipment
EP3675477B1 (en) Electronic device for providing function by using rgb image and ir image acquired through one image sensor
CN105472263B (en) Image acquisition method and the image capture equipment for using the method
CN110933297B (en) Photographing control method and device of intelligent photographing system, storage medium and system
US20130021442A1 (en) Electronic camera
CN104954694A (en) Industrial camera capable of viewing panoramic image in real time through WIFI (wireless fidelity)
CN111355943A (en) Monitoring equipment, method and device, storage medium and electronic device
CN110266967A (en) Image processing method, device, storage medium and electronic equipment
JP2014103643A (en) Imaging device and subject recognition method
CN105472226A (en) Front and rear two-shot panorama sport camera

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20201202

Address after: 518057 2 / F, software building, No.9, Gaoxin Middle Road, Nanshan District, Shenzhen, Guangdong Province

Applicant after: SHENZHEN KUANG-CHI SUPER MATERIAL TECHNOLOGY Co.,Ltd.

Address before: Bantian street Longgang District of Shenzhen City, Guangdong province 518300 Jihua Road, the new world Huasai Industrial Area No. 2, building 101 (two floor) 201

Applicant before: SHENZHEN KUANG-CHI SPACE TECH. Co.,Ltd.

SE01 Entry into force of request for substantive examination