WO2020035080A1 - Tracking camera method, device and terminal device - Google Patents

Tracking camera method, device and terminal device Download PDF

Info

Publication number
WO2020035080A1
WO2020035080A1 PCT/CN2019/107878
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
tracking
sound signal
sound source
Prior art date
Application number
PCT/CN2019/107878
Other languages
English (en)
French (fr)
Inventor
李耀伟
吴海全
邱振青
张恩勤
曹磊
师瑞文
Original Assignee
深圳市冠旭电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市冠旭电子股份有限公司 filed Critical 深圳市冠旭电子股份有限公司
Publication of WO2020035080A1 publication Critical patent/WO2020035080A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone

Definitions

  • the invention belongs to the technical field of smart speakers, and particularly relates to a tracking camera method, a device and a terminal device.
  • the embodiments of the present invention provide a tracking camera method, a device and a terminal device to solve the problem that most existing speakers have only one camera and cannot rotate. Even if the smart speaker is equipped with a camera with a pan / tilt head, the rotation speed of the pan / tilt head is very slow, which cannot meet the user's demand for tracking and shooting people in a multi-person video call through the smart speaker.
  • a first aspect of the embodiments of the present invention provides a tracking camera method, including:
  • a second aspect of the embodiments of the present invention provides a tracking camera device, including:
  • a positioning module configured to locate a sound source position of the sound signal if a sound signal is detected
  • a calling module configured to call a camera according to the sound source position
  • a first control module configured to control the camera to capture an image of the sound source position
  • a determining module configured to perform face recognition on the image, and determine a face in the image that emits the sound signal
  • a second control module is configured to control the camera to track and shoot the face of the person who emits the sound signal.
  • a third aspect of the embodiments of the present invention provides a tracking camera terminal device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method described above when executing the computer program.
  • a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, implements the steps of the foregoing method.
  • the embodiment of the present invention detects a sound signal, locates the sound source position, controls a camera to capture an image of the sound source position, performs face recognition on the image to determine the face that emits the sound signal, and then controls the camera to track and shoot that face; during a multi-person video call, it can quickly track and shoot the users who emit sound signals.
  • FIG. 1 is a schematic flowchart of a tracking camera method provided by Embodiment 1 of the present invention
  • FIG. 2 is a schematic flowchart of a tracking camera method provided by Embodiment 2 of the present invention
  • FIG. 3 is a schematic flowchart of a tracking camera method according to a third embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a tracking camera device according to a fourth embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a calling module according to a fifth embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a determination module according to a sixth embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a terminal device according to a seventh embodiment of the present invention.
  • this embodiment provides a tracking camera method, which can be applied to smart terminal devices such as smart speakers, mobile phones, and tablet computers.
  • the tracking camera method provided in this embodiment includes:
  • the sound source position of the sound signal can be located by sound localization technology.
  • for example, TDOA (Time Difference of Arrival) positioning can be used. TDOA positioning is a method of locating a source using time differences: by measuring the time it takes the signal to reach each monitoring station, the distance to the signal source can be determined.
  • in one embodiment, before locating the sound source position of the sound signal, the method includes:
  • the sound signal is a human voice
  • an operation of locating a sound source position of the sound signal is performed.
  • the sound in a detected sound signal is not necessarily made by a human body; it may be made by an animal or another sounding object. If a sound signal is detected through a microphone or other call device, technologies such as voice recognition, infrared sensing and image recognition can be used to judge whether the sound in the signal is a human voice, i.e. whether a human body is making the sound. If so, the position of the person can be found by locating the sound source position of the sound signal.
  • a camera that meets the conditions is called according to the sound source position to photograph the user who issued the sound signal.
  • the camera is controlled to capture an image within a certain range that includes the sound source position; the range may be centered on the sound source position, with a radius set by the user according to actual needs.
  • S104 Perform face recognition on the image, and determine a face in the image that emits the sound signal.
  • face recognition is performed on the captured image, and it is determined whether the captured face in the image is a face that emits a sound signal, so as to determine the user who issued the sound signal.
  • S105 Control the camera to track and shoot the face of the person who sends the sound signal.
  • the camera is controlled to shoot the user who sends out the sound signal, and during the shooting process, the user who sends out the sound signal is tracked until the utterance of the user who sends out the sound signal ends.
  • step S105 the method further includes:
  • if image data of the face emitting the sound signal captured by the camera is received, the image data is downsampled; the layers of all the image data are then composited into target image data, which is compressed and transmitted over the network to the peer of the user in the video call.
  • the downsampling process refers to the process of reducing the sampling rate of a specific signal, and is usually used to reduce the data transmission rate or data size.
  • Other devices refer to any device with communication functions that is the same as or different from the current device.
  • step S102 includes:
  • after the sound source position is determined, the system checks whether, among the multiple cameras, any camera has not performed a tracking task, i.e. has never performed a tracking task so far in the current call, so its usage rate in this call is 0. If there is none, it checks whether any of the cameras is currently not in the state of performing a tracking task.
  • step S1021 includes:
  • step S1021 further includes:
  • the usage frequency of each camera among all cameras is detected and judged. If there is a camera whose usage frequency is less than a preset frequency threshold, that camera is called. If there is no such camera, the distances between all cameras and the sound source position are detected, and the camera closest to the sound source position is selected to track and shoot the user who emitted the sound signal.
  • the frequency threshold is the minimum average usage frequency of a camera in video calls, set by the user according to the actual situation.
  • multiple pan/tilt heads are added, with multiple cameras mounted on each head; the best camera for the current tracking shot is determined and called based on each camera's usage frequency and its distance from the sound source position, which effectively improves the speed and efficiency of camera invocation and the practicality of camera tracking video.
  • step S104 includes:
  • S1041. Perform face recognition on the image to determine the number of faces in the image.
  • face recognition is performed on the images captured by the camera, and the number of faces in the images captured by the camera is determined according to the results of the face recognition.
  • if the number of faces recognized in the image equals one, it can be determined that the face in the image captured by the camera is the face emitting the sound signal, i.e. the user who emitted it.
  • the mouth of each face image in the image is located.
  • after the mouth of each face in the image is located, the mouth movements can be used to judge whether a given mouth is emitting the sound signal, thereby determining which face is the one emitting it.
  • By combining face recognition with mouth-movement recognition, this embodiment can effectively determine the user currently emitting the sound signal and lock the camera onto that user for tracking shooting, improving the completeness and clarity of the video call during tracking.
  • this embodiment provides a tracking camera device 100 for performing the method steps in the first embodiment.
  • the tracking camera device 100 provided in this embodiment includes:
  • a positioning module 101 configured to locate a sound source position of the sound signal if a sound signal is detected
  • a calling module 102 configured to call a camera according to the sound source position
  • a first control module 103 configured to control the camera to capture an image of the sound source position
  • a determining module 104 configured to perform face recognition on the image, and determine a face in the image that emits the sound signal
  • the second control module 105 is configured to control the camera to track and shoot the face of the person who emits the sound signal.
  • the tracking camera device 100 further includes:
  • the processing module is configured to process the image data if the image data of the human face emitting the sound signal captured by the camera is received.
  • a transmission module configured to synthesize a plurality of the image data into target image data, and transmit the target image data.
  • the tracking camera device 100 further includes:
  • a judging module configured to judge whether the sound signal is a human voice
  • An execution module is configured to perform an operation of locating a sound source position of the sound signal if the sound signal is human voice.
  • the calling module 102 in the third embodiment includes:
  • a first determining unit 1021 configured to determine whether there is a camera that has not performed a tracking task
  • a first detection unit 1022 configured to detect the distances between all cameras that have not performed a tracking task and the sound source position if there are cameras that have not performed a tracking task;
  • the first calling unit 1023 is configured to call a camera that is closest to the sound source position among all cameras that have not performed the tracking task.
  • the calling module 102 further includes:
  • a second determining unit configured to determine whether the use frequency of each camera in all cameras is less than a frequency threshold if there is no camera that has not performed a tracking task
  • the second calling unit is configured to call a camera whose use frequency is less than the frequency threshold if there is a camera whose use frequency is less than the frequency threshold.
  • the calling module 102 further includes:
  • a second detection unit configured to detect the distances between all cameras and the sound source position if there is no camera whose usage frequency is less than the frequency threshold;
  • the third calling unit is configured to call a camera closest to the sound source position.
  • multiple pan/tilt heads are added, with multiple cameras mounted on each head; the best camera for the current tracking shot is determined and called based on each camera's usage frequency and its distance from the sound source position, which effectively improves the speed and efficiency of camera invocation and the practicality of camera tracking video.
  • the determining module 104 in the third embodiment includes:
  • the first determining unit 1041 is configured to perform face recognition on the image, and determine the number of faces in the image.
  • the second determining unit 1042 is configured to determine that, if the number of faces in the image is equal to one, the faces in the image are faces that emit the sound signal.
  • the positioning unit 1043 is configured to locate the mouth of each face in the image if the number of faces in the image is greater than one.
  • the third determining unit 1044 is configured to determine a human face in the image that emits the sound signal according to a mouth motion of locating a human face in the image.
  • By combining face recognition with mouth-movement recognition, this embodiment can effectively determine the user currently emitting the sound signal and lock the camera onto that user for tracking shooting, improving the completeness and clarity of the video call during tracking.
  • FIG. 7 is a schematic diagram of a tracking camera terminal device according to an embodiment of the present invention.
  • the tracking camera terminal device 7 of this embodiment includes a processor 70, a memory 71, and a computer program 72, such as a tracking camera program, stored in the memory 71 and executable on the processor 70.
  • when the processor 70 executes the computer program 72, the steps in the foregoing tracking camera method embodiments are implemented, for example, steps S101 to S105 shown in FIG. 1. Alternatively, when the processor 70 executes the computer program 72, the functions of each module/unit in the foregoing device embodiments are implemented, for example, the functions of modules 101 to 105 shown in FIG. 4.
  • the computer program 72 may be divided into one or more modules / units, and the one or more modules / units are stored in the memory 71 and executed by the processor 70 to complete this invention.
  • the one or more modules / units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 72 in the tracking camera terminal device 7.
  • the computer program 72 may be divided into a positioning module, a calling module, a first control module, a determination module, and a second control module.
  • the specific functions of each module are as follows:
  • a positioning module configured to locate a sound source position of the sound signal if a sound signal is detected
  • a calling module configured to call a camera according to the sound source position
  • a first control module configured to control the camera to capture an image of the sound source position
  • a determining module configured to perform face recognition on the image, and determine a face in the image that emits the sound signal
  • a second control module is configured to control the camera to track and shoot the face of the person who emits the sound signal.
  • the tracking camera terminal device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the tracking camera terminal device may include, but is not limited to, a processor 70 and a memory 71.
  • FIG. 7 is only an example of the tracking camera terminal device 7 and does not constitute a limitation on it; the device may include more or fewer components than shown, a combination of some components, or different components; for example, the tracking camera terminal device may further include an input/output device, a network access device, a bus, and the like.
  • the processor 70 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 71 may be an internal storage unit of the tracking camera terminal device 7, such as a hard disk or a memory of the tracking camera terminal device 7.
  • the memory 71 may also be an external storage device of the tracking camera terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the device. Further, the memory 71 may also include both an internal storage unit of the tracking camera terminal device 7 and an external storage device.
  • the memory 71 is configured to store the computer program and other programs and data required by the tracking camera terminal device.
  • the memory 71 may also be used to temporarily store data that has been output or is to be output.
  • the disclosed apparatus / terminal device and method may be implemented in other ways.
  • the device / terminal device embodiments described above are only schematic.
  • the division of the modules or units is only a logical function division.
  • components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes in the methods of the above embodiments by means of a computer program instructing the relevant hardware.
  • the computer program may be stored in a computer-readable storage medium.
  • when the computer program is executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file, or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

This application pertains to the technical field of smart speakers and provides a tracking camera method, device and terminal device. The method includes: if a sound signal is detected, locating the sound source position of the sound signal, calling a camera according to the sound source position, and controlling the camera to capture an image of the sound source position; performing face recognition on the image and determining the face in the image that emits the sound signal; and controlling the camera to track and shoot the face that emits the sound signal. By detecting a sound signal, locating the sound source position, controlling a camera to capture an image of that position, performing face recognition on the image to determine the face emitting the sound signal, and then controlling the camera to track and shoot that face, this application can quickly track and shoot the user who emits a sound signal during a multi-person video call.

Description

Tracking camera method, device and terminal device
Technical Field
The present invention belongs to the technical field of smart speakers, and in particular relates to a tracking camera method, device and terminal device.
Background Art
With the development of smart speakers, they have gained more and more functions, and video calling is one of them.
However, most existing speakers have only one camera, which cannot rotate. Even when a smart speaker is equipped with a camera on a pan/tilt head, the head rotates very slowly and cannot meet the user's need to track and shoot people during a multi-person video call through the smart speaker, resulting in a poor user experience.
Technical Problem
In view of this, embodiments of the present invention provide a tracking camera method, device and terminal device to solve the problem that most existing speakers have only one camera, which cannot rotate, and that even when a smart speaker is equipped with a camera on a pan/tilt head, the head rotates too slowly to meet the user's need to track and shoot people during a multi-person video call through the smart speaker.
Technical Solution
A first aspect of the embodiments of the present invention provides a tracking camera method, including:
if a sound signal is detected, locating a sound source position of the sound signal;
calling a camera according to the sound source position;
controlling the camera to capture an image of the sound source position;
performing face recognition on the image, and determining a face in the image that emits the sound signal;
controlling the camera to track and shoot the face that emits the sound signal.
A second aspect of the embodiments of the present invention provides a tracking camera device, including:
a positioning module, configured to locate the sound source position of a sound signal if the sound signal is detected;
a calling module, configured to call a camera according to the sound source position;
a first control module, configured to control the camera to capture an image of the sound source position;
a determining module, configured to perform face recognition on the image and determine the face in the image that emits the sound signal;
a second control module, configured to control the camera to track and shoot the face that emits the sound signal.
A third aspect of the embodiments of the present invention provides a tracking camera terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the above method when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the above method.
Beneficial Effects
Embodiments of the present invention detect a sound signal, locate the sound source position and control a camera to capture an image of that position, perform face recognition on the image to determine the face emitting the sound signal, and then control the camera to track and shoot that face, so that during a multi-person video call the user emitting a sound signal can be tracked and shot quickly.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the drawings required for describing the embodiments or the prior art. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a tracking camera method provided by Embodiment 1 of the present invention;
FIG. 2 is a schematic flowchart of a tracking camera method provided by Embodiment 2 of the present invention;
FIG. 3 is a schematic flowchart of a tracking camera method provided by Embodiment 3 of the present invention;
FIG. 4 is a schematic structural diagram of a tracking camera device provided by Embodiment 4 of the present invention;
FIG. 5 is a schematic structural diagram of a calling module provided by Embodiment 5 of the present invention;
FIG. 6 is a schematic structural diagram of a determining module provided by Embodiment 6 of the present invention;
FIG. 7 is a schematic diagram of a terminal device provided by Embodiment 7 of the present invention.
Embodiments of the Invention
To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The term "including" and any variants thereof in the specification, claims and drawings of the present invention are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product or device. In addition, the terms "first", "second" and "third" are used to distinguish different objects, not to describe a particular order.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Embodiment 1
As shown in FIG. 1, this embodiment provides a tracking camera method, which can be applied to smart terminal devices such as smart speakers, mobile phones and tablet computers. The tracking camera method provided in this embodiment includes:
S101. If a sound signal is detected, locate the sound source position of the sound signal.
In specific applications, if a sound signal is detected, the sound source position of the signal can be located by sound localization technology, for example TDOA (Time Difference of Arrival) positioning. TDOA positioning locates a source using time differences: by measuring the time it takes the signal to reach each monitoring station, the distance to the signal source can be determined.
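As an illustration of the TDOA idea described above, here is a minimal sketch for a two-microphone array under a far-field assumption. The function name, the microphone spacing and the room-temperature speed of sound are assumptions for this sketch, not details from the patent:

```python
import math

# Speed of sound in air (m/s); room-temperature assumption.
SPEED_OF_SOUND = 343.0

def tdoa_bearing(delta_t, mic_spacing):
    """Estimate the bearing of a sound source from the time difference
    of arrival (delta_t, seconds) between two microphones spaced
    mic_spacing metres apart. Returns the angle in degrees relative to
    the broadside of the microphone pair (far-field assumption)."""
    # Extra path length travelled to the farther microphone:
    path_diff = SPEED_OF_SOUND * delta_t
    # Clamp for numerical safety, then invert d * sin(theta) = path_diff.
    ratio = max(-1.0, min(1.0, path_diff / mic_spacing))
    return math.degrees(math.asin(ratio))

# A source directly in front of the pair arrives at both mics at once:
print(tdoa_bearing(0.0, 0.1))  # 0.0 degrees
# A 0.146 ms lag over a 10 cm baseline puts the source near 30 degrees:
print(round(tdoa_bearing(0.000146, 0.1), 1))
```

A real multi-microphone system would cross-correlate the channels to obtain delta_t and combine several pairs to get a full position rather than a single bearing.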
In one embodiment, before locating the sound source position of the sound signal, the method includes:
judging whether the sound signal is a human voice;
if the sound signal is a human voice, performing the operation of locating the sound source position of the sound signal.
In specific applications, the sound in a detected sound signal is not necessarily made by a human body; it may come from an animal or another sounding object. If a sound signal is detected through a microphone or other call device, technologies such as voice recognition, infrared sensing and image recognition can be used to judge whether the sound in the signal is a human voice, i.e. whether a human body is making the sound. If so, the position of the speaking person can be found by locating the sound source position of the sound signal.
S102. Call a camera according to the sound source position.
In specific applications, after the sound source position of the sound signal is located, a camera that meets the conditions is called according to that position to shoot the user who emitted the sound signal.
S103. Control the camera to capture an image of the sound source position.
In specific applications, the camera is controlled to capture an image of a certain range that includes the sound source position; for example, the range may be centered on the sound source position, with a radius set by the user according to actual needs.
S104. Perform face recognition on the image, and determine the face in the image that emits the sound signal.
In specific applications, face recognition is performed on the captured image to judge whether a face captured in the image is the face emitting the sound signal, thereby determining the user who emitted it.
S105. Control the camera to track and shoot the face that emits the sound signal.
In specific applications, the camera is controlled to shoot the user who emits the sound signal and, during shooting, to track that user until the user stops speaking.
In one embodiment, after step S105, the method further includes:
if image data of the face emitting the sound signal captured by the camera is received, performing downsampling on the image data;
compositing the multiple pieces of downsampled image data into target image data, and transmitting the target image data to other devices communicating with the current device.
In specific applications, if image data of the face emitting the sound signal captured by the camera is received, the image data is processed; the layers of all the image data are then composited into target image data, which is compressed and transmitted over the network to the peer of the user in the video call. Here, downsampling refers to reducing the sampling rate of a given signal, usually to reduce the data transmission rate or data size, and "other devices" means any device with communication capability, whether the same as or different from the current device.
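The downsample-then-composite step above can be sketched as follows. This is a schematic reduction, assuming frames are represented as simple Python values and compositing means pairing up simultaneous frames from each camera; real image data would be arrays and the compositing would be pixel-level layer merging:

```python
def downsample_frames(frames, keep_every=2):
    """Temporal downsampling: keep one frame in every `keep_every`,
    cutting the data rate before compositing and network transfer."""
    return frames[::keep_every]

def composite(frame_layers):
    """Merge the per-camera frame lists layer by layer into a single
    target stream (here: tuples of simultaneous frames)."""
    return list(zip(*frame_layers))

# Halve a 10-frame stream, then pair it with a second camera's stream:
stream = downsample_frames(list(range(10)), keep_every=2)
print(stream)                          # [0, 2, 4, 6, 8]
print(composite([[1, 2], ["x", "y"]]))
```

In the method described here, the composited target data would then be compressed and sent to the remote party of the video call.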
In this embodiment, by detecting a sound signal, locating the sound source position, controlling a camera to capture an image of that position, performing face recognition on the image to determine the face emitting the sound signal, and then controlling the camera to track and shoot that face, the user who emits a sound signal can be tracked and shot quickly during a multi-person video call.
Embodiment 2
As shown in FIG. 2, this embodiment further describes the method steps of Embodiment 1. In this embodiment, step S102 includes:
S1021. Judge whether there is a camera that has not performed a tracking task.
In specific applications, after the sound source position is determined, the system checks whether, among the multiple cameras, any camera has not performed a tracking task, i.e. has never performed a tracking task so far in the current call (its usage rate in this call is 0). If there is none, it checks whether any of the cameras is currently not in the state of performing a tracking task.
S1022. If there are cameras that have not performed a tracking task, detect the distances between all such cameras and the sound source position.
In specific applications, if there are cameras that have not performed a tracking task, the distances between the sound source position and all cameras that have never performed a tracking task in this call, or that are not currently performing one, are detected.
S1023. Call the camera closest to the sound source position among all cameras that have not performed a tracking task.
In specific applications, after the distances between all such cameras and the sound source position are detected, the distances are sorted and the camera closest to the sound source position among all cameras that have not performed a tracking task is called.
In one embodiment, step S1021 includes:
S10211. If there is no camera that has not performed a tracking task, judge whether the usage frequency of each camera among all cameras is less than a frequency threshold;
S10212. If there is a camera whose usage frequency is less than the frequency threshold, call the camera whose usage frequency is less than the frequency threshold.
In one embodiment, step S1021 further includes:
S10213. If there is no camera whose usage frequency is less than the frequency threshold, detect the distances between all cameras and the sound source position;
S10214. Call the camera closest to the sound source position.
In specific applications, if no camera has yet to perform a tracking task, the usage frequency of every camera is detected and judged. If a camera's usage frequency is less than the preset frequency threshold, that camera is called. If no camera's usage frequency is below the threshold, the distances between all cameras and the sound source position are detected, and the camera closest to the sound source position is selected to track and shoot the user who emitted the sound signal. The frequency threshold is the minimum average usage frequency of a camera in video calls, set by the user according to the actual situation.
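The three-tier selection rule of steps S1021 to S10214 can be sketched as a single function. The camera record structure (`pos`, `tracked`, `freq` keys) is a hypothetical representation for this sketch, and breaking ties among rarely used cameras by distance is an added assumption rather than something the patent specifies:

```python
import math

def pick_camera(cameras, source_pos, freq_threshold):
    """Select a camera per the three-tier rule described above:
    1) among cameras that have not yet run a tracking task in this
       call, take the one closest to the sound source;
    2) otherwise, take a camera whose usage frequency is below the
       threshold (closest first, an assumed tie-break);
    3) otherwise, take the camera closest to the sound source."""
    def dist(cam):
        return math.dist(cam["pos"], source_pos)

    idle = [c for c in cameras if not c["tracked"]]
    if idle:
        return min(idle, key=dist)
    rarely_used = [c for c in cameras if c["freq"] < freq_threshold]
    if rarely_used:
        return min(rarely_used, key=dist)
    return min(cameras, key=dist)
```

For example, with one busy camera and two idle ones, the nearer idle camera wins; once all cameras have tracked, the frequency threshold and finally raw distance decide.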
In this embodiment, multiple pan/tilt heads are added, with multiple cameras mounted on each head; the best camera for the current tracking shot is determined and called based on each camera's usage frequency and its distance from the sound source position, which effectively improves the speed and efficiency of camera invocation and the practicality of camera tracking video.
Embodiment 3
As shown in FIG. 3, this embodiment further describes the method steps of Embodiment 1. In this embodiment, step S104 includes:
S1041. Perform face recognition on the image, and determine the number of faces in the image.
In specific applications, face recognition is performed on the image captured by the camera, and the number of faces in the image is determined from the recognition result.
S1042. If the number of faces in the image equals one, determine that the face in the image is the face emitting the sound signal.
In specific applications, if exactly one face is recognized in the image, it can be determined that this face is the face emitting the sound signal, i.e. the user who emitted it.
S1043. If the number of faces in the image is greater than one, locate the mouth of each face in the image.
If more than one face appears in the image captured by the camera, only one of the people in the image is the user emitting the sound signal; in this case the mouth of each face in the image is located.
S1044. Determine the face in the image that emits the sound signal according to the mouth movements of the located faces.
In specific applications, after the mouth of each face in the image is located, the mouth movements of each face can be used to judge whether that mouth is emitting the sound signal, thereby determining which face is the one emitting it.
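Steps S1042 to S1044 can be sketched as follows. The input format (a list of per-face mouth-opening measurements over successive frames) and the "largest opening range" criterion are assumptions for this sketch; the patent only requires that mouth movement identify the speaker:

```python
def find_speaking_face(mouth_openings):
    """Given, per detected face, a list of mouth-opening measurements
    sampled over successive frames, return the index of the face whose
    mouth moves most (largest opening range). With a single face it is
    returned directly, matching step S1042."""
    if len(mouth_openings) == 1:
        return 0
    def movement(openings):
        return max(openings) - min(openings)
    return max(range(len(mouth_openings)),
               key=lambda i: movement(mouth_openings[i]))

# One still face and one face whose mouth opens and closes:
print(find_speaking_face([[0.1, 0.1], [0.1, 0.6, 0.2]]))  # 1
```

The selected face index would then be handed to the tracking step S105 so the camera locks onto that user.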
By combining face recognition with mouth-movement recognition, this embodiment can effectively determine the user currently emitting the sound signal and lock the camera onto that user for tracking shooting, improving the completeness and clarity of the video call during tracking.
Embodiment 4
As shown in FIG. 4, this embodiment provides a tracking camera device 100 for performing the method steps of Embodiment 1. The tracking camera device 100 provided in this embodiment includes:
a positioning module 101, configured to locate the sound source position of a sound signal if the sound signal is detected;
a calling module 102, configured to call a camera according to the sound source position;
a first control module 103, configured to control the camera to capture an image of the sound source position;
a determining module 104, configured to perform face recognition on the image and determine the face in the image that emits the sound signal;
a second control module 105, configured to control the camera to track and shoot the face that emits the sound signal.
In one embodiment, the tracking camera device 100 further includes:
a processing module, configured to process the image data if image data of the face emitting the sound signal captured by the camera is received;
a transmission module, configured to composite multiple pieces of the image data into target image data and transmit the target image data.
In one embodiment, the tracking camera device 100 further includes:
a judging module, configured to judge whether the sound signal is a human voice;
an execution module, configured to perform the operation of locating the sound source position of the sound signal if the sound signal is a human voice.
In this embodiment, by detecting a sound signal, locating the sound source position, controlling a camera to capture an image of that position, performing face recognition on the image to determine the face emitting the sound signal, and then controlling the camera to track and shoot that face, the user who emits a sound signal can be tracked and shot quickly during a multi-person video call.
Embodiment 5
As shown in FIG. 5, in this embodiment the calling module 102 of Embodiment 3 includes:
a first judging unit 1021, configured to judge whether there is a camera that has not performed a tracking task;
a first detection unit 1022, configured to detect the distances between all cameras that have not performed a tracking task and the sound source position if there are such cameras;
a first calling unit 1023, configured to call the camera closest to the sound source position among all cameras that have not performed a tracking task.
In one embodiment, the calling module 102 further includes:
a second judging unit, configured to judge whether the usage frequency of each camera among all cameras is less than a frequency threshold if there is no camera that has not performed a tracking task;
a second calling unit, configured to call a camera whose usage frequency is less than the frequency threshold if such a camera exists.
In one embodiment, the calling module 102 further includes:
a second detection unit, configured to detect the distances between all cameras and the sound source position if there is no camera whose usage frequency is less than the frequency threshold;
a third calling unit, configured to call the camera closest to the sound source position.
In this embodiment, multiple pan/tilt heads are added, with multiple cameras mounted on each head; the best camera for the current tracking shot is determined and called based on each camera's usage frequency and its distance from the sound source position, which effectively improves the speed and efficiency of camera invocation and the practicality of camera tracking video.
Embodiment 6
As shown in FIG. 6, in this embodiment the determining module 104 of Embodiment 3 includes:
a first determining unit 1041, configured to perform face recognition on the image and determine the number of faces in the image;
a second determining unit 1042, configured to determine that the face in the image is the face emitting the sound signal if the number of faces in the image equals one;
a locating unit 1043, configured to locate the mouth of each face in the image if the number of faces in the image is greater than one;
a third determining unit 1044, configured to determine the face in the image that emits the sound signal according to the mouth movements of the located faces.
By combining face recognition with mouth-movement recognition, this embodiment can effectively determine the user currently emitting the sound signal and lock the camera onto that user for tracking shooting, improving the completeness and clarity of the video call during tracking.
It should be understood that the numbering of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment 7
FIG. 7 is a schematic diagram of a tracking camera terminal device provided by an embodiment of the present invention. As shown in FIG. 7, the tracking camera terminal device 7 of this embodiment includes a processor 70, a memory 71, and a computer program 72, such as a tracking camera program, stored in the memory 71 and executable on the processor 70. When the processor 70 executes the computer program 72, the steps in the foregoing tracking camera method embodiments are implemented, for example, steps S101 to S105 shown in FIG. 1. Alternatively, when the processor 70 executes the computer program 72, the functions of the modules/units in the foregoing device embodiments are implemented, for example, the functions of modules 101 to 105 shown in FIG. 4.
Exemplarily, the computer program 72 may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the segments being used to describe the execution process of the computer program 72 in the tracking camera terminal device 7. For example, the computer program 72 may be divided into a positioning module, a calling module, a first control module, a determining module and a second control module, with the following specific functions:
a positioning module, configured to locate the sound source position of a sound signal if the sound signal is detected;
a calling module, configured to call a camera according to the sound source position;
a first control module, configured to control the camera to capture an image of the sound source position;
a determining module, configured to perform face recognition on the image and determine the face in the image that emits the sound signal;
a second control module, configured to control the camera to track and shoot the face that emits the sound signal.
The tracking camera terminal device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The tracking camera terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art will understand that FIG. 7 is only an example of the tracking camera terminal device 7 and does not constitute a limitation on it; the device may include more or fewer components than shown, a combination of some components, or different components; for example, the tracking camera terminal device may further include input/output devices, network access devices, buses, and the like.
The processor 70 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may be an internal storage unit of the tracking camera terminal device 7, such as a hard disk or memory of the device. The memory 71 may also be an external storage device of the tracking camera terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card provided on the device. Further, the memory 71 may include both an internal storage unit and an external storage device of the tracking camera terminal device 7. The memory 71 is used to store the computer program and other programs and data required by the tracking camera terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is only an example; in practical applications, the above functions may be assigned to different functional units and modules as needed, i.e. the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for ease of mutual distinction and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of other embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device/terminal device and method may be implemented in other ways. For example, the device/terminal device embodiments described above are only illustrative; the division into modules or units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention may also be completed by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, can implement the steps of the foregoing method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features therein may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.

Claims (12)

  1. A tracking camera method, characterized by comprising:
    if a sound signal is detected, locating a sound source position of the sound signal;
    invoking a camera according to the sound source position;
    controlling the camera to capture an image of the sound source position;
    performing face recognition on the image to determine the face in the image that emitted the sound signal; and
    controlling the camera to track and photograph the face that emitted the sound signal.
  2. The tracking camera method according to claim 1, wherein invoking a camera according to the sound source position comprises:
    determining whether there is a camera that is not executing a tracking task;
    if there is a camera that is not executing a tracking task, detecting the distances between all cameras not executing a tracking task and the sound source position; and
    invoking, among all cameras not executing a tracking task, the camera closest to the sound source position.
  3. The tracking camera method according to claim 2, wherein after determining whether there is a camera that is not executing a tracking task, the method comprises:
    if there is no camera that is not executing a tracking task, determining whether the usage frequency of each of the cameras is less than a frequency threshold; and
    if there is a camera whose usage frequency is less than the frequency threshold, invoking the camera whose usage frequency is less than the frequency threshold.
  4. The tracking camera method according to claim 2, wherein after determining whether there is a camera that is not executing a tracking task, the method comprises:
    if there is no camera whose usage frequency is less than the frequency threshold, detecting the distances between all cameras and the sound source position; and
    invoking the camera closest to the sound source position.
  5. The tracking camera method according to claim 1, wherein performing face recognition on the image to determine the face in the image that emitted the sound signal comprises:
    performing face recognition on the image to determine the number of faces in the image;
    if the number of faces in the image equals one, determining that the face in the image is the face that emitted the sound signal;
    if the number of faces in the image is greater than one, locating the mouth of each face in the image; and
    determining, according to the mouth movements of the faces located in the image, the face in the image that emitted the sound signal.
  6. The tracking camera method according to claim 1, wherein before locating the sound source position of the sound signal, the method comprises:
    determining whether the sound signal is a human voice; and
    if the sound signal is a human voice, performing the operation of locating the sound source position of the sound signal.
  7. A tracking camera apparatus, characterized by comprising:
    a locating module, configured to locate a sound source position of a sound signal if the sound signal is detected;
    an invoking module, configured to invoke a camera according to the sound source position;
    a first control module, configured to control the camera to capture an image of the sound source position;
    a determining module, configured to perform face recognition on the image and determine the face in the image that emitted the sound signal; and
    a second control module, configured to control the camera to track and photograph the face that emitted the sound signal.
  8. The tracking camera apparatus according to claim 7, wherein the invoking module comprises:
    a first judging unit, configured to determine whether there is a camera that is not executing a tracking task;
    a first detecting unit, configured to, if there is a camera that is not executing a tracking task, detect the distances between all cameras not executing a tracking task and the sound source position; and
    a first invoking unit, configured to invoke, among all cameras not executing a tracking task, the camera closest to the sound source position.
  9. The tracking camera apparatus according to claim 7, wherein the invoking module further comprises:
    a second judging unit, configured to, if there is no camera that is not executing a tracking task, determine whether the usage frequency of each of the cameras is less than a frequency threshold; and
    a second invoking unit, configured to, if there is a camera whose usage frequency is less than the frequency threshold, invoke the camera whose usage frequency is less than the frequency threshold.
  10. The tracking camera apparatus according to claim 7, wherein the invoking module further comprises:
    a second detecting unit, configured to, if there is no camera whose usage frequency is less than the frequency threshold, detect the distances between all cameras and the sound source position; and
    a third invoking unit, configured to invoke the camera closest to the sound source position.
  11. A tracking camera terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
  12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
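
The decision logic recited in claims 2 through 5 can be sketched as follows. This is a minimal illustration only, not the patented implementation: the data layout and all field names (`pos`, `tracking`, `uses`, `mouth_movement`) are assumptions, and a real system would obtain camera positions from configuration, the sound source position from microphone-array localization, and mouth-movement scores from a face-landmark model.

```python
import math

def select_camera(cameras, source_pos, freq_threshold):
    """Pick a camera for a new tracking task in the order of claims 2-4:
    first the idle camera closest to the sound source; failing that, a
    camera used less often than the frequency threshold; failing that,
    the closest camera overall. Each camera is a dict with a 'pos'
    coordinate tuple, a 'tracking' flag, and a 'uses' counter."""
    def dist(cam):
        return math.dist(cam["pos"], source_pos)

    idle = [c for c in cameras if not c["tracking"]]
    if idle:  # claim 2: some camera is not executing a tracking task
        return min(idle, key=dist)
    rarely_used = [c for c in cameras if c["uses"] < freq_threshold]
    if rarely_used:  # claim 3: a camera's usage frequency is below the threshold
        return min(rarely_used, key=dist)  # closest-first tie-break is an assumption
    return min(cameras, key=dist)  # claim 4: fall back to the closest camera

def find_speaking_face(faces):
    """Determine which detected face emitted the sound, per claim 5:
    a single detected face is taken as the speaker; among several
    faces, the one showing the most mouth movement is chosen.
    'mouth_movement' is a hypothetical score, e.g. frame-to-frame
    variation of lip-landmark positions."""
    if not faces:
        return None  # no face detected in the captured image
    if len(faces) == 1:  # exactly one face: it is the speaker
        return faces[0]
    return max(faces, key=lambda f: f["mouth_movement"])
```

The camera returned by `select_camera` would then be aimed at the sound source, and the face returned by `find_speaking_face` handed to the tracking loop of claim 1.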
PCT/CN2019/107878 2018-08-13 2019-09-25 Tracking camera method, apparatus, and terminal device WO2020035080A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810917532.XA CN110830708A (zh) 2018-08-13 2018-08-13 Tracking camera method, apparatus, and terminal device
CN201810917532.X 2018-08-13

Publications (1)

Publication Number Publication Date
WO2020035080A1 true WO2020035080A1 (zh) 2020-02-20

Family

ID=69525212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/107878 WO2020035080A1 (zh) 2018-08-13 2019-09-25 Tracking camera method, apparatus, and terminal device

Country Status (2)

Country Link
CN (1) CN110830708A (zh)
WO (1) WO2020035080A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476126A (zh) * 2020-03-27 2020-07-31 海信集团有限公司 Indoor positioning method and system, and smart device
CN112365522A (zh) * 2020-10-19 2021-02-12 中标慧安信息技术股份有限公司 Method for cross-boundary tracking of persons within a park
CN116489502A (zh) * 2023-05-12 2023-07-25 深圳星河创意科技开发有限公司 Remote conference method based on an AI camera docking station, and AI camera docking station

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN111432115B (zh) * 2020-03-12 2021-12-10 浙江大华技术股份有限公司 Face tracking method based on sound-assisted positioning, terminal, and storage device
CN113411487B (zh) * 2020-03-17 2023-08-01 中国电信股份有限公司 Device control method, apparatus, system, and computer-readable storage medium
CN112104810A (zh) * 2020-07-28 2020-12-18 苏州触达信息技术有限公司 Panoramic photographing apparatus, panoramic photographing method, and computer-readable storage medium
CN112367473A (zh) * 2021-01-13 2021-02-12 北京电信易通信息技术股份有限公司 Rotatable camera apparatus based on voiceprint arrival phase, and control method thereof
CN114257742B (zh) * 2021-12-15 2024-05-03 惠州视维新技术有限公司 Control method of a pan-tilt camera, storage medium, and pan-tilt camera
CN115278083A (zh) * 2022-07-29 2022-11-01 歌尔科技有限公司 Control method of a security device, security device, and storage medium
CN116980744B (zh) * 2023-09-25 2024-01-30 深圳市美高电子设备有限公司 Feature-based camera tracking method and apparatus, electronic device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN102572282A (zh) * 2012-01-06 2012-07-11 鸿富锦精密工业(深圳)有限公司 Intelligent tracking device
TW201330609A (zh) * 2012-01-06 2013-07-16 Hon Hai Prec Ind Co Ltd Intelligent tracking device
CN105338238A (zh) * 2014-08-08 2016-02-17 联想(北京)有限公司 Photographing method and electronic device
CN105357442A (zh) * 2015-11-27 2016-02-24 小米科技有限责任公司 Method and apparatus for adjusting camera shooting angle
US20160286133A1 * 2013-09-29 2016-09-29 Zte Corporation Control Method, Control Device, and Control Equipment

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
TWI502583B (zh) * 2013-04-11 2015-10-01 Wistron Corp Voice processing device and voice processing method
CN104284150A (zh) * 2014-09-23 2015-01-14 同济大学 Autonomous cooperative tracking method for intelligent cameras based on road traffic monitoring, and monitoring system thereof
CN104580992B (zh) * 2014-12-31 2018-01-23 广东欧珀移动通信有限公司 Control method and mobile terminal
CN106157956A (zh) * 2015-03-24 2016-11-23 中兴通讯股份有限公司 Speech recognition method and apparatus
CN105116920B (zh) * 2015-07-07 2018-07-10 百度在线网络技术(北京)有限公司 Artificial-intelligence-based intelligent robot tracking method and apparatus, and intelligent robot
CN106161985B (zh) * 2016-07-05 2019-08-27 宁波菊风系统软件有限公司 Implementation method for an immersive video conference


Also Published As

Publication number Publication date
CN110830708A (zh) 2020-02-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19850494; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19850494; Country of ref document: EP; Kind code of ref document: A1)