WO2023115685A1 - Monitoring method based on omnidirectional camera and system thereof - Google Patents

Monitoring method based on omnidirectional camera and system thereof

Info

Publication number
WO2023115685A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
camera
video image
pixel
target
Prior art date
Application number
PCT/CN2022/076339
Other languages
English (en)
French (fr)
Inventor
张世渡
Original Assignee
威艾特科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 威艾特科技(深圳)有限公司
Publication of WO2023115685A1 publication Critical patent/WO2023115685A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present application relates to the technical field of video communication, in particular to a monitoring method and system based on an omnidirectional camera.
  • a camera (CAMERA or WEBCAM) is a video input device, a type of closed-circuit television, and is widely used in video conferencing, telemedicine and real-time monitoring.
  • the camera generally has basic functions such as video photography, broadcasting and static image capture.
  • the signal is then fed into the computer through a parallel port or USB connection, after which software reconstructs the image to form a picture.
  • the inventor believes that it is difficult for cameras in the related art to be steered quickly toward a target of interest according to the requirements of the video client.
  • the present application provides a monitoring method and system based on an omnidirectional camera.
  • a monitoring method based on an omnidirectional camera comprising:
  • the camera group achieves all-round coverage of the environment, and the video processor cuts, synthesizes, and digitally zooms the video images acquired by the camera group according to the viewing requirements of each video client, simulating a virtual camera and outputting a target video image at any orientation and zoom ratio required by the video client.
  • control instruction includes a virtual shooting range
  • the specific steps of extracting and processing the video image from the shared buffer area based on the control instruction to obtain the target video image include:
  • the virtual shooting range is obtained according to the needs of the video client; input pixels are then extracted from the shared buffer based on the virtual shooting range and processed, so as to obtain a target video image at any orientation and zoom ratio required by the video client.
  • the specific step of extracting input pixels from the shared buffer area based on the virtual shooting range includes:
  • the second target camera is a camera whose preset shooting range does not contain the virtual shooting range but intersects it;
  • Input pixels are extracted from the new video image based on the virtual capture range.
  • if the virtual shooting range falls within any single preset shooting range, the input pixels can be extracted directly from the shared buffer corresponding to that camera without cropping or compositing the video image, which helps save processing time; if the virtual shooting range can only be fully covered by the preset shooting ranges of multiple cameras, the video images must be cropped and composited so that the resulting new video image is complete and continuous.
  • the specific steps of cropping the overlapping video images in all the second target camera video images include:
  • the reference object has the same and smallest observation deflection angle relative to the two adjacent second target cameras;
  • the two input video images are cropped respectively according to the overlapping range.
  • control instruction includes an output resolution
  • the specific steps of processing the input pixels and generating an output video image after processing include:
  • An output video image is generated based on the output pixels.
  • control instruction includes a shielding range
  • the specific steps of processing the input pixels and generating an output video image after processing include:
  • An output video image is generated based on the pixel value adjusted input pixels.
  • masking certain privacy areas prevents images of objects in those areas from appearing in the output video, which helps obtain the target video image required by the video client.
  • the specific steps of processing the input pixels and generating an output video image after processing include:
  • An output video image is generated based on the pixel value adjusted input pixels.
  • masking certain privacy areas prevents images of objects in those areas from appearing in the output video, which helps obtain the target video image required by the video client.
  • a surveillance system based on an omnidirectional camera including:
  • a camera device and a video processing device, where the camera device is connected to the video processing device, and the video processing device is connected to a video client device;
  • the camera device is used to acquire video images, and transmit the video images to the video processing device;
  • the video processing device is configured to receive the video image and the control instruction sent by the video client device, and generate a target video image after processing the video image according to the control instruction, and send the video image to the video client device output the target video image;
  • the video client device is configured to send the control instruction to the video processing device and receive the target video image.
  • the video image is cut, synthesized and digitally zoomed, and then the target video image with any orientation and zoom ratio is transmitted to the video client.
  • an integrated audio unit, power supply unit and controllable mobile unit are also included.
  • the video stream sent by the remote user is received and displayed, and the audio stream is sent to the remote user, thus becoming an omnidirectional video conferencing terminal with stronger functions.
  • a machine-readable storage medium, where the storage medium stores machine-executable program code that is executed to perform the method according to any one of claims 1-7.
  • the present application includes at least one of the following beneficial technical effects:
  • the camera group realizes all-round coverage of the environment.
  • the video processor cuts, synthesizes, and digitally zooms the video images acquired by the camera group according to the viewing requirements of each video client, simulates a virtual camera, and outputs a target video image at any orientation and zoom ratio required by the video client.
  • Fig. 1 is the main flowchart of a kind of monitoring method based on omnidirectional camera in the embodiment of the present application;
  • FIG. 2 is an overall schematic diagram of video image processing in a monitoring method based on an omnidirectional camera according to an embodiment of the present application
  • FIG. 3 is a schematic diagram of cropping and stitching adjacent camera images in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram 1 of pixel shielding in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram 2 of pixel shielding in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of pixel distance calculation in a monitoring method based on an omnidirectional camera according to an embodiment of the present application
  • FIG. 7 is an overall structural diagram of a remote agent robot in a monitoring system based on an omnidirectional camera according to an embodiment of the present application.
  • FIG. 8 is an overall structural diagram of a monitoring system based on an omnidirectional camera according to an embodiment of the present application.
  • the embodiment of the present application discloses a monitoring method based on an omnidirectional camera.
  • a monitoring method based on an omnidirectional camera includes steps S1000-S5000:
  • Step S1000 Capture video images acquired by all cameras in the camera group.
  • Step S2000 Store all video images in the shared buffer respectively.
  • the video image is stored in the shared buffer in the form of its pixel matrix, where each pixel includes its brightness and color; the shared buffer comprises multiple storage units, and the video image corresponding to each camera is stored in a different storage unit.
  • the video processor of each video client can access the shared buffer area.
  • Step S3000 Receive the control instruction sent by the video client.
  • the video processor receives control instructions from the video client, and the control instructions include virtual shooting range, shielding range, and output resolution.
  • Step S4000 Based on the control instruction, the video image is extracted from the shared buffer area and processed to obtain the target video image.
  • Processing video images includes operations such as screen dragging, rotation, cropping, flattening, digital zooming, and shielding of video images.
  • Step S5000 Send the target video image to the video client.
  • the target video image is the final video data received by the video client, that is, video data of any orientation angle and scaling ratio required by the video client.
  • step S4000 includes steps S4100-S4300:
  • step S4100 extracting input pixels from the corresponding shared buffer area based on the virtual shooting range.
  • the virtual shooting range is the shooting coverage of the virtual camera, where the virtual camera is defined as the camera that shoots the target video image; this camera is simulated according to the client's needs and does not physically exist; the input pixels are pixels of the video images acquired by the camera group.
  • step S4200 processing the input pixels and generating an output video image after processing.
  • the processing of the video image is realized by processing its pixels, and the processing of the input pixels includes operations such as cropping, stitching, and digital scaling.
  • Step S4300 Compress and code the output video image to obtain the target video image.
  • after compression and encoding, a video compression format accepted by the video client, such as MJPEG or H.264, is generated; specifically, in other embodiments not described here, the video image may also be compressed and encoded by an external processor.
  • step S4100 includes steps S4110-S4140:
  • Step S4110 Obtain the preset shooting range of each camera in the camera group.
  • each camera in the camera group is fixed, so its preset shooting range is also fixed.
  • Step S4120 Compare the preset shooting range with the virtual shooting range.
  • Step S4130 Determine whether the virtual shooting range completely falls into any preset shooting range.
  • the virtual shooting range falling entirely within the preset shooting range of any camera means that the virtual shooting range appears only within the preset shooting range of a single camera; the virtual shooting range not falling entirely within the preset shooting range of any camera means that the virtual shooting range can only be fully covered by the preset shooting ranges of multiple cameras.
  • Step S4140 If yes, extract input pixels from the shared buffer area corresponding to the first target camera, where the first target camera is a camera whose preset shooting range includes a virtual shooting range.
  • the second target camera is a camera whose preset shooting range does not contain the virtual shooting range but intersects it; crop the overlapping video images among the video images of all second target cameras; stitch the remaining video images to form a new video image; extract input pixels from the new video image based on the virtual shooting range.
  • all video images are stored in a shared buffer area
  • the shared buffer area corresponding to the first target camera refers to: a storage unit that stores video images acquired by the first target camera;
  • the shared buffer area corresponding to the camera refers to a storage unit for storing video images acquired by the second target camera.
  • Step S4141 Identify the reference objects in the input video images of the two adjacent second target cameras, where the reference objects have the same and smallest observation deflection angles relative to the two adjacent second target cameras.
  • Step S4142 Calculate the overlapping range of the two input video images based on the reference object.
  • Step S4143 Crop the two input video images respectively according to the overlapping range.
  • the camera group supports horizontal 360° and vertical 180° video coverage.
  • taking video synthesis in the horizontal direction as an example, six cameras with a 120° viewing angle are used in the horizontal direction, divided into an odd-numbered group and an even-numbered group; the two groups of cameras form 360° cross coverage.
  • for example, when the virtual shooting range falls entirely within the preset shooting range of camera No. 3, the video processor can extract the input pixels directly from the shared buffer corresponding to camera No. 3.
  • when the virtual shooting range can only be fully covered by the preset shooting ranges of cameras No. 3 and No. 4 together, the overlapping video images acquired by the two cameras are cropped; assuming the two video images overlap by L pixels, complementary halves of L pixels are cropped from the overlapping parts, e.g. the left L/2 pixels of the overlapping part in the first video image and the right L/2 pixels of the overlapping part in the second video image; the remaining video images are then stitched to form a new video image, and the input pixels of the new video image are extracted from the shared buffer.
  • in step S4141, it is ensured that the object specified by the user overlaps completely, and cropping and stitching are performed according to this degree of overlap, so that the focus object is more complete and continuous.
  • the specific steps of step S4200 include steps S4210a-S4270a:
  • Step S4210a Obtain the preset resolution of the camera.
  • all cameras in the camera group are of the same model.
  • Step S4220a Calculate the ratio of the preset resolution to the output resolution.
  • the output resolution is the resolution of the output image, and the control instruction includes the output resolution, so the output resolution can be obtained directly, and the ratio in this embodiment is n.
  • Step S4230a Select a plurality of reference pixels from all input pixels according to the ratio.
  • the number of reference pixels depends on the size of the input pixels and the size of the ratio n.
  • Step S4240a Obtain the position of the output pixel based on the position of the reference pixel and the ratio.
  • for a reference pixel at position (x, y) with ratio n, its output pixel position is (n*x, n*y).
  • Step S4250a Select surrounding pixels around the reference pixel based on the ratio.
  • Step S4260a Obtain the basic pixel information value of the reference pixel and surrounding pixels corresponding to the reference pixel, and use all basic pixel information values as the basic pixel information value of the output pixel.
  • Basic pixel information values include the color and brightness of a pixel.
  • Step S4270a Generate an output video image based on the output pixels.
  • the specific steps of step S4200 include steps S4210b-S4270b.
  • Step S4210b Based on the preset shooting range of the camera corresponding to the input pixel, obtain the start angle and end angle of the masked range.
  • the masking range is the pixel range of the masking area, where the masking area is defined as: the area to be masked in the target video image.
  • Step S4220b Obtain the line-of-sight angle of the observation pixel of the camera corresponding to the input pixel.
  • the pixel point is the position of the pixel.
  • a plane coordinate system is established with the camera center as the origin; tangent lines are drawn from the camera center to the masked area, and the angles they form with the horizontal axis give the start angle s of the masked range and the end angle e of the masked pixels; the angle between the horizontal axis and the line connecting the camera center to a pixel point is the line-of-sight angle f at which the camera observes that pixel.
  • Step S4240b Compare each view angle with the start angle and the end angle respectively.
  • Step S4250b Determine whether the line-of-sight angle is between the start angle and the end angle.
  • Step S4260b If the line of sight angle is between the start angle and the end angle, set the pixel value of the corresponding input pixel as the preset masked pixel value; if the line of sight angle is outside the start angle and the end angle, do not adjust The pixel value of the corresponding input pixel.
  • Step S4270b Generate an output video image based on the input pixels whose pixel values have been adjusted.
  • the specific steps of step S4200 include steps S4210c-S4260c:
  • Step S4210c Obtain the actual distance between any two adjacent cameras.
  • At least two cameras are included, and the actual distance is the distance from the center point to the center point of the two cameras.
  • Step S4220c Obtain a first line-of-sight angle and a second line-of-sight angle at which any two adjacent cameras observe objects corresponding to all input pixels.
  • the first and second line-of-sight angles are equal to the first and second line-of-sight angles at which the virtual camera observes the input pixels in the video image, so they can be calculated from the distance of an input pixel to the edge of the output video image.
  • Step S4230c Based on the actual distance, the first line-of-sight angle and the second line-of-sight angle, respectively obtain the relative distances from the objects corresponding to all the input pixels to the camera group.
  • the relative distance satisfies D = tan(a) * tan(b) / (tan(a) + tan(b)) * d, where a is the first line-of-sight angle, b is the second line-of-sight angle, and d is the actual distance between the two adjacent cameras.
  • the relative distance is replaced by the distance from the object corresponding to the input pixel to the overall center point of the camera group.
  • Step S4240c Determine whether the relative distance is smaller than a preset threshold.
  • the preset threshold S is equal to the distance from the point on the threshold equation to the camera, and the range of the equation is determined according to the control instruction sent by the video client to simulate the virtual partition.
  • Step S4250c If the relative distance is greater than or equal to the threshold, set the pixel value of the corresponding output pixel as a masked pixel value; if the relative distance is smaller than the threshold, then do not adjust the pixel value of the corresponding output pixel.
  • if D ≥ S, that is, the pixel point lies on or behind the virtual partition, its pixel value is set to the masked pixel value.
  • Step S4260c Generate an output video image based on the output pixels after pixel value adjustment.
  • the masking can also be performed on the output image, and objects can be restored to lines and surfaces before the masking calculation.
  • the implementation principle of the monitoring method based on an omnidirectional camera in the embodiment of the present application is as follows: the video processor captures the video images obtained by each camera in the camera group and stores them in the shared buffer; based on the control instructions sent by the video client, the images are cropped, stitched, digitally zoomed and masked; and after encoding and compression, a target video image at any orientation and zoom ratio required by the video client is output.
  • Based on the monitoring method based on an omnidirectional camera, the present application also discloses a monitoring system based on an omnidirectional camera.
  • a monitoring system based on an omnidirectional camera includes:
  • a camera device and a video processing device, where the camera device is connected to the video processing device, and the video processing device is connected to a video client device;
  • the camera device is used to acquire video images and transmit the video images to the video processing device;
  • the video processing device is used to receive the video image and the control instruction sent by the video client device, and generate a target video image after processing the video image according to the control command, and output the target video image to the video client device;
  • the video client device is used for sending control instructions to the video processing device and receiving target video images.
  • the camera device includes a camera group
  • the video processing device includes a video processor
  • the video client device includes a video client
  • the camera device and the video processing device may be electrically or communicatively connected, and the video processing device and the video client device may be electrically or communicatively connected.
  • the operation content of processing the video image includes cropping, stitching, digital zooming, masking, etc. of the video image.
  • the video processor includes a video image capture module, an image synthesis module and a video compression coding module.
  • the video image capture module captures high-definition video images
  • the image synthesis module synthesizes the output video images
  • the video compression encoding module compresses and encodes the video images, and then generates video compression encoding formats accepted by the video client, such as mjpg or H.264.
  • a monitoring system based on an omnidirectional camera also includes an integrated audio device, a power supply device, and a controllable mobile device, thereby forming a remote agent robot.
  • the audio device includes a microphone, a loudspeaker, a display screen, and a wireless network interface (Wi-Fi or mobile data network);
  • the power supply device includes a battery and supporting charging and power supply circuit modules;
  • the controllable mobile device includes a controllable mobile base, wheels, drive motors, a lifting-column motor, etc.; a main control CPU and supporting drive circuits are used to control the omnidirectional camera, the audio device and the controllable mobile device.
  • the software run by the main control CPU receives the control instructions sent by the user through the network, controls the drive motor according to the control instructions, and drives the remote agent to move.
  • the software receives the video stream sent by the remote user and displays it on the screen.
  • the software also sends the sound and video images of the microphone and camera group to the remote user.
  • the implementation principle of the monitoring system based on an omnidirectional camera in the embodiment of the present application is as follows: according to the needs of the video client, the video images are cropped, composited, digitally zoomed and masked, and a target video image at any orientation and zoom ratio is then transmitted to the video client.
  • Based on the monitoring method based on an omnidirectional camera, the present application also discloses a machine-readable storage medium.
  • a machine-readable storage medium is used for storing machine-executable program codes, and the machine-executable program codes are run to execute a monitoring method based on an omnidirectional camera.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application relates to the field of computer technology, and in particular to a monitoring method based on an omnidirectional camera and a system thereof. The method comprises: capturing the video images acquired by all cameras in a camera group; storing all the video images separately in a shared buffer; receiving a control instruction sent by a video client; extracting video images from the shared buffer based on the control instruction and processing them to obtain a target video image; and sending the target video image to the video client. According to the viewing requirements of each video client, a video processor crops, composites and digitally zooms the video images acquired by the camera group, simulating a virtual camera and outputting a target video image at any orientation and zoom ratio required by the video client.

Description

Monitoring method based on omnidirectional camera and system thereof
TECHNICAL FIELD
The present application relates to the technical field of video communication, and in particular to a monitoring method based on an omnidirectional camera and a system thereof.
BACKGROUND
A camera (CAMERA or WEBCAM) is a video input device and a type of closed-circuit television, widely used in video conferencing, telemedicine and real-time monitoring.
In the related art, a camera generally has basic functions such as video recording, transmission and still-image capture. An image is collected through a lens, processed by the photosensitive components and control circuitry inside the camera, and converted into a digital signal the computer can recognize; the signal is then fed into the computer through a parallel port or USB connection, after which software reconstructs the image to form a picture.
With respect to the above related art, the inventor believes that such cameras are difficult to steer quickly toward a target of interest according to the requirements of a video client.
SUMMARY
In order to output a target video selected according to the needs of a video client, the present application provides a monitoring method based on an omnidirectional camera and a system thereof.
A monitoring method based on an omnidirectional camera, comprising:
capturing the video images acquired by all cameras in a camera group;
storing all the video images separately in a shared buffer;
receiving a control instruction sent by a video client;
extracting the video images from the shared buffer based on the control instruction and processing them to obtain a target video image;
sending the target video image to the video client.
By adopting the above technical solution, the camera group achieves all-round coverage of the environment; according to the viewing requirements of each video client, the video processor crops, composites and digitally zooms the video images acquired by the camera group, simulating a virtual camera and outputting a target video image at any orientation and zoom ratio required by the video client.
Optionally, the control instruction includes a virtual shooting range;
the specific steps of extracting the video images from the shared buffer based on the control instruction and processing them to obtain the target video image include:
extracting input pixels from the shared buffer based on the virtual shooting range;
processing the input pixels and generating an output video image after processing;
compressing and encoding the output video image to obtain the target video image.
By adopting the above technical solution, the virtual shooting range is obtained according to the needs of the video client, input pixels are then extracted from the shared buffer based on the virtual shooting range, and the input pixels are processed, thereby obtaining a target video image at any orientation and zoom ratio required by the video client.
Optionally, the specific steps of extracting input pixels from the shared buffer based on the virtual shooting range include:
obtaining the preset shooting range of each camera in the camera group;
comparing the preset shooting ranges with the virtual shooting range;
determining whether the virtual shooting range falls entirely within any preset shooting range;
if so, extracting input pixels from the shared buffer corresponding to a first target camera, the first target camera being a camera whose preset shooting range contains the virtual shooting range;
if not, extracting video images from the shared buffers corresponding to all second target cameras, a second target camera being a camera whose preset shooting range does not contain the virtual shooting range but intersects it;
cropping the overlapping video images among the video images of all the second target cameras;
stitching the remaining video images to form a new video image;
extracting input pixels from the new video image based on the virtual shooting range.
By adopting the above technical solution, if the virtual shooting range falls within any single preset shooting range, input pixels can be extracted directly from the shared buffer corresponding to that camera without cropping or compositing the video image, which helps save processing time; if the virtual shooting range can only be fully covered by the preset shooting ranges of multiple cameras, the video images must be cropped and composited so that the resulting new video image is complete and continuous.
Optionally, the specific steps of cropping the overlapping video images among the video images of all the second target cameras include:
identifying a reference object in the input video images of two adjacent second target cameras, the reference object having the same and smallest observation deflection angle relative to the two adjacent second target cameras;
calculating the overlapping range of the two input video images based on the reference object;
cropping the two input video images respectively according to the overlapping range.
By adopting the above technical solution, the overlapping video images acquired by two adjacent cameras are cropped and composited accurately, which helps prevent the resulting new video image from missing or duplicating content.
Optionally, the control instruction includes an output resolution,
and the specific steps of processing the input pixels and generating an output video image after processing include:
obtaining the preset resolution of the camera;
calculating the ratio of the preset resolution to the output resolution;
selecting a plurality of reference pixels from all the input pixels according to the ratio;
obtaining the positions of output pixels based on the positions of the reference pixels and the ratio;
selecting surrounding pixels around each reference pixel based on the ratio;
obtaining the basic pixel values of each reference pixel and its corresponding surrounding pixels, and using all the basic pixel values as the basic pixel value of the output pixel;
generating an output video image based on the output pixels.
By adopting the above technical solution, digital zooming of the video image is achieved by processing the input pixels, so that a target video image at any zoom ratio required by the video client is output.
Optionally, the control instruction includes a masked range;
the specific steps of processing the input pixels and generating an output video image after processing include:
obtaining the start angle and end angle of the masked range based on the preset shooting range of the camera corresponding to the input pixels;
obtaining the line-of-sight angle at which the camera corresponding to the input pixels observes each pixel point;
comparing each line-of-sight angle with the start angle and the end angle respectively;
determining whether the line-of-sight angle lies between the start angle and the end angle;
if the line-of-sight angle lies between the start angle and the end angle, setting the pixel value of the corresponding input pixel to a preset masked pixel value;
if the line-of-sight angle lies outside the start angle and the end angle, leaving the pixel value of the corresponding input pixel unadjusted;
generating an output video image based on the input pixels whose pixel values have been adjusted.
By adopting the above technical solution, certain privacy areas are masked so that images of objects in those areas do not appear in the output video, which helps obtain the target video image required by the video client.
Optionally, the specific steps of processing the input pixels and generating an output video image after processing include:
obtaining the actual distance between any two adjacent cameras;
obtaining the first line-of-sight angle and second line-of-sight angle at which the two adjacent cameras observe the objects corresponding to all the input pixels;
obtaining, based on the actual distance, the first line-of-sight angle and the second line-of-sight angle, the relative distance from the object corresponding to each input pixel to the camera group;
determining whether the relative distance is smaller than a preset threshold;
if the relative distance is greater than or equal to the threshold, setting the pixel value of the corresponding input pixel to a masked pixel value;
if the relative distance is smaller than the threshold, leaving the pixel value of the corresponding input pixel unadjusted;
generating an output video image based on the input pixels whose pixel values have been adjusted.
By adopting the above technical solution, certain privacy areas are masked so that images of objects in those areas do not appear in the output video, which helps obtain the target video image required by the video client.
A monitoring system based on an omnidirectional camera, comprising:
a camera device and a video processing device, the camera device being connected to the video processing device, and the video processing device being connected to a video client device;
the camera device is used to acquire video images and transmit the video images to the video processing device;
the video processing device is used to receive the video images and a control instruction sent by the video client device, generate a target video image after processing the video images according to the control instruction, and output the target video image to the video client device;
the video client device is used to send the control instruction to the video processing device and receive the target video image.
By adopting the above technical solution, after the video images are cropped, composited and digitally zoomed according to the needs of the video client, a target video image at any orientation and zoom ratio is transmitted to the video client.
Optionally, an integrated audio device, a power supply device and a controllable mobile device are further included.
By adopting the above technical solution, the system receives and displays the video stream sent by the remote user and sends an audio stream to the remote user, becoming a more capable omnidirectional video conferencing terminal.
A machine-readable storage medium, the storage medium being used to store machine-executable program code, the machine-executable program code being run to execute the method according to any one of claims 1-7.
In summary, the present application includes at least one of the following beneficial technical effects:
the camera group achieves all-round coverage of the environment; according to the viewing requirements of each video client, the video processor crops, composites and digitally zooms the video images acquired by the camera group, simulating a virtual camera and outputting a target video image at any orientation and zoom ratio required by the video client.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is the main flowchart of a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 2 is an overall schematic diagram of video image processing in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 3 is a schematic diagram of cropping and stitching images of adjacent cameras in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 4 is a first schematic diagram of pixel masking in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 5 is a second schematic diagram of pixel masking in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 6 is a schematic diagram of pixel distance calculation in a monitoring method based on an omnidirectional camera according to an embodiment of the present application;
Fig. 7 is an overall structural diagram of a remote agent robot in a monitoring system based on an omnidirectional camera according to an embodiment of the present application;
Fig. 8 is an overall structural diagram of a monitoring system based on an omnidirectional camera according to an embodiment of the present application.
DETAILED DESCRIPTION
An embodiment of the present application discloses a monitoring method based on an omnidirectional camera.
Referring to Fig. 1 and Fig. 2, a monitoring method based on an omnidirectional camera includes steps S1000-S5000:
Step S1000: Capture the video images acquired by all cameras in the camera group.
Step S2000: Store all the video images separately in the shared buffer.
Here, each video image is stored in the shared buffer in the form of its pixel matrix, each pixel including its brightness and color. The shared buffer comprises multiple storage units, and the video image corresponding to each camera is stored in a different storage unit. The video processor of every video client can access the shared buffer.
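As a rough illustration, the storage scheme above can be modeled as one storage unit per camera, each holding that camera's latest pixel matrix (a minimal sketch; the class and method names are illustrative, not from the patent):

```python
import numpy as np

class SharedBuffer:
    """Shared buffer: one storage unit per camera, each holding the
    camera's latest frame as an H x W x 3 pixel matrix (brightness
    and color per pixel).  Any video client's processor may read it."""

    def __init__(self, camera_ids):
        self.units = {cid: None for cid in camera_ids}  # storage units

    def store(self, camera_id, frame):
        self.units[camera_id] = np.asarray(frame)

    def fetch(self, camera_id):
        return self.units[camera_id]

buf = SharedBuffer(camera_ids=[1, 2, 3, 4, 5, 6])      # six cameras
buf.store(3, np.zeros((480, 640, 3), dtype=np.uint8))  # camera No. 3
frame = buf.fetch(3)
```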
Step S3000: Receive the control instruction sent by the video client.
The video processor receives the control instruction from the video client; the control instruction includes the virtual shooting range, the masked range and the output resolution, among others.
Step S4000: Extract video images from the shared buffer based on the control instruction and process them to obtain the target video image.
Processing the video images includes operations such as screen dragging, rotation, cropping, stitching, digital zooming and masking.
Step S5000: Send the target video image to the video client.
Here, the target video image is the final video data received by the video client, i.e. video data at any orientation angle and zoom ratio required by the video client.
The specific steps of step S4000 include steps S4100-S4300:
Referring to Fig. 3, step S4100: Extract input pixels from the corresponding shared buffer based on the virtual shooting range.
Here, the virtual shooting range is the shooting coverage of the virtual camera, the virtual camera being defined as the camera that shoots the target video image; this camera is simulated according to the client's needs and does not physically exist. The input pixels are pixels of the video images acquired by the camera group.
Referring to Fig. 2 and Fig. 3, step S4200: Process the input pixels and generate an output video image after processing.
In this embodiment, processing of the video image is achieved by processing its pixels; processing of the input pixels includes operations such as cropping, stitching and digital zooming.
Step S4300: Compress and encode the output video image to obtain the target video image.
After the video image is compressed and encoded, a video compression format accepted by the video client, such as MJPEG or H.264, is generated. Specifically, in other embodiments not described here, the video image may also be compressed and encoded by an external processor.
The specific steps of step S4100 include steps S4110-S4140:
Step S4110: Obtain the preset shooting range of each camera in the camera group.
In this embodiment, each camera in the camera group is fixedly mounted, so its preset shooting range is also fixed.
Step S4120: Compare the preset shooting ranges with the virtual shooting range.
Step S4130: Determine whether the virtual shooting range falls entirely within any preset shooting range.
Here, the virtual shooting range falling entirely within the preset shooting range of any camera means that the virtual shooting range appears only within the preset shooting range of a single camera; the virtual shooting range not falling entirely within the preset shooting range of any camera means that the virtual shooting range can only be fully covered by the preset shooting ranges of multiple cameras.
Step S4140: If so, extract input pixels from the shared buffer corresponding to the first target camera, the first target camera being a camera whose preset shooting range contains the virtual shooting range.
If not, extract video images from the shared buffers corresponding to all second target cameras, a second target camera being a camera whose preset shooting range does not contain the virtual shooting range but intersects it; crop the overlapping video images among the video images of all second target cameras; stitch the remaining video images to form a new video image; and extract input pixels from the new video image based on the virtual shooting range.
Specifically, in this embodiment all video images are stored in one shared buffer. The shared buffer corresponding to the first target camera refers to the storage unit that stores the video images acquired by the first target camera; the shared buffer corresponding to a second target camera refers to the storage unit that stores the video images acquired by that second target camera.
Here, the specific steps of cropping the overlapping video images among the video images of all the second target cameras include steps S4141-S4143:
Step S4141: Identify a reference object in the input video images of two adjacent second target cameras, the reference object having the same and smallest observation deflection angle relative to the two adjacent second target cameras.
Step S4142: Calculate the overlapping range of the two input video images based on the reference object.
Step S4143: Crop the two input video images respectively according to the overlapping range.
In this embodiment, the camera group supports 360° horizontal and 180° vertical video coverage. Taking video synthesis in the horizontal direction as an example, six cameras with a 120° viewing angle are used in the horizontal direction, divided into an odd-numbered group and an even-numbered group; the two groups of cameras form 360° cross coverage. For example, when the virtual shooting range falls entirely within the preset shooting range of camera No. 3 and intersects only that range, the video processor can extract the input pixels directly from the shared buffer corresponding to camera No. 3.
As another example, when the virtual shooting range can only be fully covered by the preset shooting ranges of cameras No. 3 and No. 4 together, the overlapping video images acquired by cameras No. 3 and No. 4 are cropped. Assuming the two video images overlap by L pixels, complementary halves of L pixels are cropped from the overlapping parts of the two video images: for example, the left L/2 pixels of the overlapping part are cropped from the first video image, and the right L/2 pixels of the overlapping part are cropped from the second video image. The remaining video images are then stitched to form a new video image, and the input pixels of the new video image are extracted from the shared buffer.
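The complementary L/2 cropping and stitching in this example can be sketched with NumPy (an illustrative sketch assuming grayscale frames whose overlap is exactly the last/first `overlap` columns; which half each frame gives up is chosen here so the stitched columns come out in order):

```python
import numpy as np

def stitch_pair(img1, img2, overlap):
    """Crop complementary halves of the overlapping strip and stitch.

    img1's rightmost `overlap` columns show the same scene as img2's
    leftmost `overlap` columns; each frame keeps one half of the
    strip so the scene appears exactly once in the result."""
    keep1 = img1[:, : img1.shape[1] - overlap // 2]
    keep2 = img2[:, overlap - overlap // 2 :]
    return np.hstack([keep1, keep2])

img3 = np.zeros((6, 8), dtype=np.uint8)    # frame from camera No. 3
img4 = np.zeros((6, 8), dtype=np.uint8)    # frame from camera No. 4
pano = stitch_pair(img3, img4, overlap=4)  # 8 + 8 - 4 = 12 columns
```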
This method helps reduce loss of image content, but distant objects may appear duplicated near the cut line. To solve this problem, a user-specified focus-object control can be added; then, in step S4141, it is ensured that the user-specified object overlaps completely, and cropping and stitching are performed according to this degree of overlap. Stitched this way, the focus object is more complete and continuous.
The specific steps of step S4200 include steps S4210a-S4270a:
Step S4210a: obtain the preset resolution of the camera.
In this embodiment, all cameras in the camera group are of the same model.
Step S4220a: compute the ratio of the preset resolution to the output resolution.
The output resolution is the resolution of the output image; since the control instruction carries the output resolution, it can be read directly. In this embodiment the ratio is n.
Step S4230a: select a number of base pixels from all the input pixels according to the ratio.
Specifically, the number of base pixels is determined by the number of input pixels and the size of the ratio n.
Step S4240a: derive the position of each output pixel from the position of its base pixel and the ratio.
For a base pixel at position (x, y) and ratio n, the corresponding output pixel position is (n·x, n·y).
Step S4250a: select the surrounding pixels around each base pixel according to the ratio.
n²−1 surrounding pixels around each base pixel are selected.
Step S4260a: obtain the base pixel information values of each base pixel and its corresponding surrounding pixels, and derive the output pixel's base pixel information value from all of those values.
The base pixel information values include the pixel's color and brightness.
Step S4270a: generate the output video image from the output pixels.
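One plausible reading of steps S4210a-S4270a, for an integer ratio n and a single-channel frame, is block averaging: each base pixel is combined with its n²−1 surrounding pixels to form one output pixel. The grouping below is an assumption about how the averaging is arranged, not the patent's exact scheme:

```python
import numpy as np

def downscale_average(frame, n):
    """Downscale a grayscale frame by an integer factor n (steps S4210a-S4270a,
    sketched): one base pixel per n x n block, averaged with its n*n - 1
    surrounding pixels to produce the output pixel's information value.
    """
    h, w = frame.shape[:2]
    cropped = frame[: h - h % n, : w - w % n].astype(float)
    # group into n x n blocks and average over each block
    return cropped.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

src = np.full((8, 8), 200.0)        # uniform 8x8 test frame
small = downscale_average(src, 2)
print(small.shape)  # (4, 4)
```

Averaging rather than simple subsampling keeps the brightness and color information of the discarded neighbours, which matches the patent's use of all base pixel information values.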
Referring to Figure 4, the specific steps of step S4200 include steps S4210b-S4270b.
Step S4210b: obtain the start angle and end angle of the masking range based on the preset shooting range of the camera corresponding to the input pixels.
Here, the masking range is the pixel range of the masked region, where the masked region is defined as the region of the target video image that needs to be masked.
Step S4220b: obtain the line-of-sight angle at which the camera corresponding to the input pixels observes each pixel point.
Specifically, a pixel point is the location of a pixel.
In this embodiment, a plane coordinate system is set up with the camera's center point as the origin. Tangent lines are drawn from the camera center to the masked region; the two tangents forming the largest angular difference with respect to the horizontal axis bound the masking range, and their angles to the horizontal axis are the start angle s and the end angle e of the masking range. A line is drawn from the camera center to each pixel point; the angle it forms with the horizontal axis is the line-of-sight angle f at which the camera observes that pixel point.
Step S4240b: compare each line-of-sight angle with the start angle and the end angle.
That is, f is compared with s and with e.
Step S4250b: determine whether the line-of-sight angle lies between the start angle and the end angle.
Step S4260b: if the line-of-sight angle lies between the start angle and the end angle, set the corresponding input pixel's value to a preset masking pixel value; if the line-of-sight angle lies outside the start and end angles, leave the corresponding input pixel's value unchanged.
In this embodiment, a line-of-sight angle equal to the start angle or to the end angle also counts as lying between them; that is, when s ≤ f ≤ e the pixel point's value is set to the masking pixel value, so all pixel points between the start and end angles are masked.
Step S4270b: generate the output video image from the input pixels after pixel-value adjustment.
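The angle-based masking of steps S4210b-S4270b can be sketched as follows, using `atan2` to obtain the line-of-sight angle f of each pixel point in the camera-centered coordinate system; the pixel representation and function name are illustrative, not from the patent:

```python
import math

def mask_by_angle(pixels, cam_center, s, e, mask_value=0):
    """Angle-based masking (steps S4210b-S4270b, illustrative sketch).

    pixels: list of ((x, y), value) pairs in the camera's plane frame.
    A pixel is masked when its line-of-sight angle f satisfies s <= f <= e
    (boundary angles are masked too, per the embodiment).
    """
    cx, cy = cam_center
    out = []
    for (x, y), value in pixels:
        f = math.atan2(y - cy, x - cx)  # line-of-sight angle to the pixel point
        out.append(((x, y), mask_value if s <= f <= e else value))
    return out

# mask everything seen between 0 and 90 degrees from the origin
masked = mask_by_angle([((1, 1), 255), ((1, -1), 255)], (0, 0), 0.0, math.pi / 2)
print([v for _, v in masked])  # [0, 255]
```

A real implementation would iterate over the frame's pixel grid rather than a point list, but the per-pixel comparison s ≤ f ≤ e is the same.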
Referring to Figures 5 and 6, the specific steps of step S4200 include steps S4210c-S4260c:
Step S4210c: obtain the actual distance between any two adjacent cameras.
In this embodiment there are at least two cameras; the actual distance is the distance between the center points of the two cameras.
Step S4220c: obtain the first line-of-sight angle and the second line-of-sight angle at which the two adjacent cameras observe the object corresponding to each input pixel.
In this embodiment, the first and second line-of-sight angles equal the first and second line-of-sight angles at which the virtual camera in the video image observes the input pixels, so they can be computed from the distance between the input pixel and the edge of the output video image.
Step S4230c: based on the actual distance, the first line-of-sight angle, and the second line-of-sight angle, obtain the relative distance from the object corresponding to each input pixel to the camera group.
The relative distance satisfies the formula:
D = tan(a)·tan(b) / (tan(a) + tan(b)) · d;
where a is the first line-of-sight angle, b is the second line-of-sight angle, and d is the actual distance between the two cameras. Specifically, since in this embodiment the focal points of adjacent cameras, and indeed of all cameras in the group, are not far apart, the distance from the object corresponding to the input pixel to the overall center point of the camera group is used in place of the relative distance.
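The relative-distance formula above follows from intersecting the two sight lines over the baseline joining the cameras. A minimal numeric check, assuming the angles a and b are measured from that baseline (names are illustrative):

```python
import math

def triangulate_depth(a, b, d):
    """Perpendicular distance D of an object seen at angles a and b by two
    cameras a baseline d apart: D = tan(a)*tan(b) / (tan(a)+tan(b)) * d.
    """
    ta, tb = math.tan(a), math.tan(b)
    return ta * tb / (ta + tb) * d

# symmetric case: both cameras see the object at 45 degrees, baseline 2 m,
# so the object sits midway, 1 m from the baseline
depth = triangulate_depth(math.pi / 4, math.pi / 4, 2.0)
print(depth)  # 1.0
```

The symmetric 45° case is easy to verify by hand: the object lies at the apex of an isosceles right-angled construction over the 2 m baseline.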
Step S4240c: determine whether the relative distance is less than a preset threshold.
In this embodiment, the preset threshold S equals the distance from a point on a threshold-line equation to the camera; the range of that equation is determined from the control instruction sent by the video client, simulating a virtual partition.
Taking the linear equation y1 = k·x1 + b as an example: a rectangular coordinate system is set up with the camera's center point as the origin; the line through each pixel point and the origin gives the equation y2 = k′·x2; the intersection of the two equations is computed; and from the intersection coordinates the threshold along the camera's line of sight to the pixel point is computed.
For example, with x1 = −1 (−2 ≤ y1 ≤ −1) and y2 = x2, the intersection is (−1, −1), so the threshold is S = √2.
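The worked example above (a virtual partition at x = −1 intersected with the sight line y = x) can be reproduced numerically; the helper below is an illustrative reconstruction limited to vertical partitions, with the camera at the origin:

```python
import math

def partition_threshold(slope_sight, x_wall=-1.0):
    """Threshold S for a vertical virtual partition x = x_wall, measured
    along the sight line y = slope_sight * x from the camera at the origin
    (illustrative version of the threshold computation under step S4240c).
    """
    x = x_wall
    y = slope_sight * x_wall            # intersection of sight line and partition
    return math.hypot(x, y)             # distance from the origin to that point

print(partition_threshold(1.0))  # sqrt(2) ~= 1.41421356...
```

A pixel whose triangulated relative distance D meets or exceeds this S lies on or behind the partition and is masked in step S4250c.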
Step S4250c: if the relative distance is greater than or equal to the threshold, set the corresponding output pixel's value to the masking pixel value; if the relative distance is less than the threshold, leave the output pixel's value unchanged.
If D ≥ S, the pixel point lies on or behind the virtual partition, and its value is set to the masking pixel value.
Step S4260c: generate the output video image from the output pixels after pixel-value adjustment.
Specifically, in other embodiments not described here, masking may also be performed on the output image, and objects may first be reconstructed as lines and surfaces before the masking computation.
The implementation principle of the omnidirectional-camera-based monitoring method of this embodiment is as follows: the video processor captures the video image acquired by each camera in the camera group and stores it in the shared buffer; based on the control instruction sent by the video client, it crops, stitches, digitally zooms, and masks the video images; and after compression and encoding it outputs a target video image at whatever orientation and zoom ratio the video client requires.
Based on the omnidirectional-camera-based monitoring method, this application further discloses an omnidirectional-camera-based monitoring system.
Referring to Figure 2, the omnidirectional-camera-based monitoring system includes:
a camera device and a video processing device, the camera device being connected to the video processing device, and the video processing device being connected to a video client device;
the camera device is configured to acquire video images and transmit them to the video processing device;
the video processing device is configured to receive the video images and the control instruction sent by the video client device, process the images according to the control instruction to generate the target video image, and output the target video image to the video client device;
the video client device is configured to send the control instruction to the video processing device and receive the target video image.
In this embodiment, the camera device includes the camera group, the video processing device includes the video processor, and the video client device includes the video client; the camera device and the video processing device, and likewise the video processing device and the video client device, may be connected electrically or communicatively.
In this embodiment, processing the video images includes cropping, stitching, digital zooming, masking, and similar operations.
The video processor includes a video image capture module, an image composition module, and a video compression and encoding module.
The video image capture module captures high-definition video images, the image composition module composes the output video image, and the video compression and encoding module compresses and encodes the video into a compressed format accepted by the video client, such as MJPEG or H.264.
Specifically, in this embodiment there are multiple video clients, each of which can send a different control instruction according to its own needs and thereby obtain a target video image with different content.
Referring to Figure 7, the omnidirectional-camera-based monitoring system further includes an integrated audio device, a power supply device, and a controllable mobile device, together forming a remote-agent robot.
In product demonstrations, tests, or other on-site settings, staff who cannot attend in person can participate through the remote-agent robot.
Referring to Figure 8, the audio device includes a microphone, a speaker, a display screen, and a wireless network interface (Wi-Fi or a mobile data network); the power supply device includes a battery with matching charging and power-supply circuit modules; the controllable mobile device includes a controllable mobile base, wheels, drive motors, a lifting-column motor, and the like. A main control CPU with matching driver circuits controls the omnidirectional camera, the audio device, and the controllable mobile device.
Software running on the main control CPU receives control instructions sent by users over the network and drives the motors accordingly to move the remote agent. The software also receives the video stream sent by the remote user and displays it on the screen, and it sends the audio from the microphone and the video images from the camera group to the remote user.
The implementation principle of the omnidirectional-camera-based monitoring system of this embodiment is as follows: according to the video client's requirements, the video images are cropped, composed, digitally zoomed, and masked, and a target video image at any orientation and zoom ratio is transmitted to the video client.
Based on the omnidirectional-camera-based monitoring method, this application further discloses a machine-readable storage medium.
A machine-readable storage medium stores machine-executable program code, which when run performs the omnidirectional-camera-based monitoring method.
The above are all preferred embodiments of this application and do not limit its scope of protection; accordingly, all equivalent changes made in accordance with the structure, shape, and principle of this application shall fall within its scope of protection.

Claims (10)

  1. A monitoring method based on an omnidirectional camera, characterized by comprising:
    capturing the video images acquired by all cameras in a camera group;
    storing all the video images separately in a shared buffer;
    receiving a control instruction sent by a video client;
    extracting the video images from the shared buffer based on the control instruction and processing them to obtain a target video image;
    sending the target video image to the video client.
  2. The monitoring method based on an omnidirectional camera according to claim 1, characterized in that the control instruction comprises a virtual shooting range;
    the specific steps of extracting the video images from the shared buffer based on the control instruction and processing them to obtain the target video image comprise:
    extracting input pixels from the shared buffer based on the virtual shooting range;
    processing the input pixels to generate an output video image;
    compressing and encoding the output video image to obtain the target video image.
  3. The monitoring method based on an omnidirectional camera according to claim 2, characterized in that the specific steps of extracting input pixels from the shared buffer based on the virtual shooting range comprise:
    obtaining the preset shooting range of each camera in the camera group;
    comparing the preset shooting ranges with the virtual shooting range;
    determining whether the virtual shooting range falls entirely within any of the preset shooting ranges;
    if so, extracting the input pixels from the shared buffer corresponding to a first target camera, the first target camera being the camera whose preset shooting range contains the virtual shooting range;
    if not, extracting video images from the shared buffers corresponding to all second target cameras, a second target camera being a camera whose preset shooting range does not contain the virtual shooting range but intersects it; cropping the overlapping portions of all the second target cameras' video images; stitching the remaining video images into a new video image; and extracting the input pixels from the new video image based on the virtual shooting range.
  4. The monitoring method based on an omnidirectional camera according to claim 3, characterized in that the specific steps of cropping the overlapping portions of all the second target cameras' video images comprise:
    identifying a reference object in the input video images of two adjacent second target cameras, the reference object being observed at the same, and the smallest, deviation angle by the two adjacent second target cameras;
    computing the overlap range of the two input video images based on the reference object;
    cropping the two input video images according to the overlap range.
  5. The monitoring method based on an omnidirectional camera according to claim 2, characterized in that the control instruction comprises an output resolution,
    and the specific steps of processing the input pixels to generate the output video image comprise:
    obtaining the preset resolution of the camera;
    computing the ratio of the preset resolution to the output resolution;
    selecting a number of base pixels from all the input pixels according to the ratio;
    deriving the position of each output pixel from the position of its base pixel and the ratio;
    selecting the surrounding pixels around each base pixel according to the ratio;
    obtaining the base pixel values of each base pixel and its corresponding surrounding pixels, and deriving the output pixel's base pixel value from all the base pixel values;
    generating the output video image from the output pixels.
  6. The monitoring method based on an omnidirectional camera according to claim 2, characterized in that the control instruction comprises a masking range;
    the specific steps of processing the input pixels to generate the output video image comprise:
    obtaining the start angle and end angle of the masking range based on the preset shooting range of the camera corresponding to the input pixels;
    obtaining the line-of-sight angle at which the camera corresponding to the input pixels observes each pixel point;
    comparing each line-of-sight angle with the start angle and the end angle;
    determining whether the line-of-sight angle lies between the start angle and the end angle;
    if the line-of-sight angle lies between the start angle and the end angle, setting the corresponding input pixel's value to a preset masking pixel value;
    if the line-of-sight angle lies outside the start angle and the end angle, leaving the corresponding input pixel's value unchanged;
    generating the output video image from the input pixels after pixel-value adjustment.
  7. The monitoring method based on an omnidirectional camera according to claim 2, characterized in that the specific steps of processing the input pixels to generate the output video image comprise:
    obtaining the actual distance between any two adjacent cameras;
    obtaining the first line-of-sight angle and the second line-of-sight angle at which the two adjacent cameras observe the objects corresponding to all the input pixels;
    based on the actual distance, the first line-of-sight angle, and the second line-of-sight angle, obtaining the relative distance from the object corresponding to each input pixel to the camera group;
    determining whether the relative distance is less than a preset threshold;
    if the relative distance is greater than or equal to the threshold, setting the corresponding input pixel's value to a masking pixel value;
    if the relative distance is less than the threshold, leaving the corresponding input pixel's value unchanged;
    generating the output video image from the input pixels after pixel-value adjustment.
  8. A monitoring system based on an omnidirectional camera, characterized by comprising:
    a camera device and a video processing device, the camera device being connected to the video processing device, and the video processing device being connected to a video client device;
    the camera device being configured to acquire video images and transmit the video images to the video processing device;
    the video processing device being configured to receive the video images and a control instruction sent by the video client device, process the video images according to the control instruction to generate a target video image, and output the target video image to the video client device;
    the video client device being configured to send the control instruction to the video processing device and receive the target video image.
  9. The monitoring system based on an omnidirectional camera according to claim 8, characterized by further comprising an integrated audio device, a power supply device, and a controllable mobile device.
  10. A machine-readable storage medium, characterized in that the storage medium stores machine-executable program code which, when run, performs the method of any one of claims 1 to 7.
PCT/CN2022/076339 2021-12-24 2022-02-15 Monitoring method based on omnidirectional camera, and system therefor WO2023115685A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111604974.7 2021-12-24
CN202111604974.7A CN114189660A (zh) 2021-12-24 2021-12-24 Monitoring method based on omnidirectional camera, and system therefor

Publications (1)

Publication Number Publication Date
WO2023115685A1 true WO2023115685A1 (zh) 2023-06-29



Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104539847A (zh) * 2014-12-26 2015-04-22 宇龙计算机通信科技(深圳)有限公司 一种全景拍照方法及移动终端
CN104754228A (zh) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 一种利用移动终端摄像头拍照的方法及移动终端
US20160063705A1 (en) * 2014-08-28 2016-03-03 Qualcomm Incorporated Systems and methods for determining a seam
CN106331603A (zh) * 2016-08-18 2017-01-11 深圳市瑞讯云技术有限公司 视频监控方法、装置、服务器及系统
CN111556283A (zh) * 2020-03-18 2020-08-18 深圳市华橙数字科技有限公司 监控摄像头管理方法、装置、终端及存储介质
CN112188163A (zh) * 2020-09-29 2021-01-05 厦门汇利伟业科技有限公司 一种实时视频图像自动去重拼接的方法和系统
CN112468832A (zh) * 2020-10-22 2021-03-09 北京拙河科技有限公司 一种十亿级像素全景视频直播方法、装置、介质及设备



Also Published As

Publication number Publication date
CN114189660A (zh) 2022-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909029

Country of ref document: EP

Kind code of ref document: A1