WO2017088378A1 - 摄像头拍摄角度调整方法及装置 - Google Patents

摄像头拍摄角度调整方法及装置 Download PDF

Info

Publication number
WO2017088378A1
WO2017088378A1 PCT/CN2016/082692 CN2016082692W WO2017088378A1 WO 2017088378 A1 WO2017088378 A1 WO 2017088378A1 CN 2016082692 W CN2016082692 W CN 2016082692W WO 2017088378 A1 WO2017088378 A1 WO 2017088378A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
face
video
module
shooting angle
Prior art date
Application number
PCT/CN2016/082692
Other languages
English (en)
French (fr)
Inventor
王阳
傅强
侯恩星
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 filed Critical 小米科技有限责任公司
Priority to KR1020167027675A priority Critical patent/KR20180078106A/ko
Priority to RU2016142694A priority patent/RU2695104C2/ru
Priority to JP2016562261A priority patent/JP6441958B2/ja
Publication of WO2017088378A1 publication Critical patent/WO2017088378A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Definitions

  • the invention relates to the field of smart homes, in particular to a method and a device for adjusting a shooting angle of a camera.
  • the shooting angle of the smart camera is fixed during use, and when the user wants to adjust the shooting angle of the smart camera, the shooting angle of the smart camera can be manually adjusted by the control device of the smart camera.
  • the invention provides a camera shooting angle adjustment method and device.
  • the technical solution is as follows:
  • a camera shooting angle adjustment method comprising:
  • Fine-adjusting the shooting angle of the camera according to the face position so that the image of the face position is at the center of the video image captured by the camera.
  • the determining the position of the human body within the cameraable range of the camera includes:
  • the location where the sound source is located is the human body position.
  • the determining the position of the human body within the cameraable range of the camera includes:
  • the detection result is that the spectral characteristic of the infrared signal matches the preset spectrum, determining that the location of the infrared signal source is the human body position.
  • the determining a face position in the video image captured by the camera includes:
  • the position of the recognized face as the face of the specified expression in the video picture is determined as the face position.
  • the determining a face position in the video image captured by the camera includes:
  • the position of the face indicated by the selection indication information in the video screen is determined as the face position.
  • the method further includes:
  • the camera When it is detected that the distance of the face image from the center of the video image is greater than a preset distance, the camera is further fine-tuned according to the position of the face image, so that the face image is in the The center of the video picture captured by the camera.
  • a camera shooting angle adjusting device comprising:
  • a first positioning module configured to determine a human body position within a camera detectable range
  • the first adjustment module is configured to adjust a shooting angle of the camera, so that an image at the human body position determined by the first positioning module is at a center of a video image captured by the camera;
  • a second positioning module configured to determine a face position in a video frame captured by the camera
  • the second adjustment module is configured to finely adjust the shooting angle of the camera according to the face position determined by the second positioning module, so that the image of the face position is in a video frame captured by the camera center.
  • the first positioning module includes:
  • a sound collection sub-module configured to acquire a sound signal emitted by a sound source within a camera-capable range
  • the frequency band detection sub-module is configured to detect whether a frequency band of the sound signal collected by the sound collection sub-module is within a preset frequency range
  • the location where the sound source is located is the human body position.
  • the first positioning module further includes:
  • An infrared collection sub-module configured to acquire an infrared signal emitted by an infrared signal source within a camera range of the camera;
  • a spectrum monitoring sub-module configured to detect whether a spectral feature of the infrared signal collected by the infrared collection sub-module matches a preset spectrum
  • the detection result is that the spectral feature of the infrared signal collected by the infrared collection sub-module matches the preset spectrum, determining that the location of the infrared signal source is the human body position.
  • the second positioning module includes:
  • An expression recognition sub-module configured to perform expression recognition on at least one of the faces of the video captured by the camera
  • the first positioning sub-module is configured to determine, as the face position, a position of the face recognized by the expression recognition sub-module as a face of the specified expression in the video picture.
  • the second positioning module further includes:
  • a screen pushing submodule configured to push the video screen to a user terminal connected to the camera
  • the information receiving sub-module is configured to receive the selection indication information returned by the user terminal, where the selection indication information is used to indicate a human face included in the video screen;
  • a second positioning submodule configured to determine, by the information receiving submodule, a location of the face indicated by the selection indication information in the video screen as the face position.
  • the device further includes:
  • the distance monitoring module is configured to monitor, after fine adjustment of the shooting angle of the camera according to the face position, a distance of a face image corresponding to the face position from a center of the video image captured by the camera;
  • the second adjustment module is further configured to, when the distance monitoring module detects that the distance of the face image from the center of the video image is greater than a preset distance, re-according to the location of the face image The shooting angle of the camera is fine-tuned so that the face image is at the center of the video frame captured by the camera.
  • a camera shooting angle adjusting device comprising:
  • a memory for storing executable instructions of the processor
  • processor is configured to:
  • Fine-adjusting the shooting angle of the camera according to the face position so that the image of the face position is at the center of the video image captured by the camera.
  • the camera After determining the position of the human body in the photographable range and adjusting the camera shooting angle to the human body position, the camera is finely adjusted according to the position of the face, and the camera angle is automatically adjusted according to the face position, and the user does not need to manually adjust. It achieves the difficulty of reducing user operation and improves the accuracy of shooting angle adjustment.
  • FIG. 1 is a flowchart of a camera shooting angle adjustment method according to an exemplary embodiment
  • FIG. 2A is a flowchart of a camera shooting angle adjustment method according to another exemplary embodiment
  • FIG. 2B is a flowchart illustrating determining a position of a human body within a camera-capable range according to another exemplary embodiment
  • 2C is a flow chart showing another determination of a human body position within a camera-capable range, according to another exemplary embodiment
  • FIG. 2D is a flowchart illustrating determining a face position according to an exemplary embodiment
  • FIG. 2E is a schematic diagram of a camera shooting angle adjustment according to another exemplary embodiment
  • 2F is another flow chart showing determining a face position according to another exemplary embodiment
  • FIG. 2G is a schematic diagram of another camera shooting angle adjustment according to another exemplary embodiment
  • FIG. 3A is a flowchart of a camera shooting angle adjustment method according to another exemplary embodiment
  • FIG. 3B is a schematic diagram of detecting a face offset distance according to another exemplary embodiment
  • FIG. 3C is a schematic diagram of a face position adjustment according to another exemplary embodiment
  • FIG. 4 is a block diagram of a camera shooting angle adjusting apparatus according to an exemplary embodiment
  • FIG. 5 is a block diagram of a camera shooting angle adjusting device according to another exemplary embodiment
  • FIG. 6 is a block diagram of an apparatus, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of a camera shooting angle adjustment method according to an exemplary embodiment.
  • the camera shooting angle adjustment method is used in a smart camera. As shown in FIG. 1, the camera shooting angle adjustment method may include the following steps.
  • step 101 the position of the human body within the camera's viewable range is determined.
  • step 102 the shooting angle of the camera is adjusted such that the image at the human body position is at the center of the video image captured by the camera.
  • step 103 the face position in the video picture captured by the camera is determined.
  • step 104 the camera's shooting angle is fine-tuned according to the face position, so that the image of the face position is at the center of the video image captured by the camera.
  • the determining the position of the human body within the camera range of the camera includes:
  • the position of the sound source is the human body position.
  • the determining the position of the human body within the camera range of the camera includes:
  • the detection result is that the spectral characteristic of the infrared signal matches the preset spectrum, it is determined that the location of the infrared signal source is the human body position.
  • determining the location of the face in the video frame captured by the camera includes:
  • the position where the recognized face is the face of the specified expression in the video picture is determined as the face position.
  • determining the location of the face in the video frame captured by the camera includes:
  • selection indication information where the selection indication information is used to indicate a face included in the video screen
  • the position of the face indicated by the selection indication information in the video screen is determined as the face position.
  • the method further includes:
  • the camera When it is detected that the distance of the face image from the center of the video image is greater than the preset distance, the camera is further fine-tuned according to the position of the face image, so that the face image is in the video captured by the camera. The center of the picture.
  • the camera shooting angle adjustment method shown in the embodiment of the present invention after determining the position of the human body in the photographable range, and adjusting the camera shooting angle to the human body position, fine-tuning the camera according to the position of the face, automatically according to The face position is finely adjusted by the camera shooting angle, and does not require manual adjustment by the user, thereby reducing the difficulty of the user operation and improving the accuracy of the shooting angle adjustment.
  • FIG. 2A is a flowchart of a camera shooting angle adjustment method according to another exemplary embodiment.
  • the camera shooting angle adjustment method is used in a smart camera. As shown in FIG. 2A, the camera shooting angle adjustment method may include the following steps.
  • step 201 the position of the human body within the photographable range of the camera is determined.
  • FIG. 2B is a flowchart of determining a human body position within a camera-capable range provided by an embodiment of the present invention.
  • the method may include:
  • Step 201a Acquire a sound signal emitted by a sound source within a camera range.
  • step 201b it is detected whether the frequency band of the sound signal is within the preset frequency range.
  • step 201c if the frequency band in which the detection result is the sound signal is within the preset frequency range, it is determined that the position where the sound source is located is the human body position.
  • the smart camera can determine the position of the human body based on the sound signal emitted by the sound source within the detectable range.
  • the intelligent camera has a sound signal collecting device therein. During a preset period of time, the sound signal collecting device in the smart camera collects the sound signal emitted by the sound source, and averages the sound signal collected during the preset time.
  • the specific time of the preset time is not limited, and can be set to 30s, or it can be set to 1min. If the average value is within a preset frequency range, it is determined that the location of the sound source is the human body position.
  • the preset frequency band range should be the frequency range of the normal human communication, for example, the frequency range of the normal human voice conversation is 130-350 Hz.
  • FIG. 2C is a flowchart illustrating another method for determining a human body within a camera-capable range provided by an embodiment of the present invention.
  • the method may include:
  • Step 201d Acquire an infrared signal emitted by an infrared signal source within a camera range.
  • Step 201e Detect whether the spectral characteristic of the infrared signal matches the preset spectrum.
  • step 201f if the detection result is that the spectral characteristic of the infrared signal matches the preset spectrum, it is determined that the position of the infrared signal source is the human body position.
  • the smart camera can determine the position of the human body through the spectral characteristics of the infrared signal.
  • the spectral characteristics of the infrared signal may include the frequency of the infrared signal and the wavelength of the infrared signal. Different types of objects have different spectral characteristics of the infrared signals.
  • the spectrum features of the human body can be set as preset spectral features, and subsequent preset spectral features are detected. When the object is matched, it can be determined that the object is a human body.
  • step 202 the shooting angle of the camera is adjusted.
  • the image at the human body position can be placed at the center of the video image captured by the camera.
  • a micro motor is disposed inside the smart camera. After the camera determines the position of the human body, the micro motor in the camera starts to work, and the shooting angle of the camera is adjusted so that the image at the human body position is located at the center of the captured video image.
  • step 203 the face position in the video picture captured by the camera is determined by face recognition.
  • the camera determines the position of the face in the video picture through face recognition.
  • the smart camera is based on human facial features in the video picture, such as ears, eyes, mouth and nose, etc., based on the shape of the facial features and the geometric relationship between them to determine the position of the face.
  • step 204 the camera's shooting angle is fine-tuned according to the face position.
  • the camera directly makes the image of the face position at the center of the video image captured by the camera.
  • one of the face positions may be determined, and subsequent fine adjustment is performed according to the determined face position, wherein multiple faces exist in the video frame.
  • Time determination The method of a face position can be as follows:
  • FIG. 2D is a flowchart of determining a face position according to an embodiment of the present invention.
  • the method may include:
  • Step 203a Perform expression recognition on at least one face in the video image captured by the camera.
  • the recognized expression is the position of the face of the specified expression in the video screen as the face position.
  • the face to be captured by the camera can be determined by the method of expression recognition.
  • the smile expression is preset to the specified expression.
  • the camera fine-tunes the shooting angle so that the face with the smiling expression is located at the center of the video frame.
  • FIG. 2E Please refer to the schematic diagram of the first camera shooting angle adjustment shown in FIG. 2E.
  • the shooting screen 20 shown in FIG. 2E there are three human faces, namely faces 20a, 20b and 20c respectively, and the camera detects one of the faces 20c.
  • To smile the expression fine-tune the shooting angle so that the face 20c with a smiling expression is at the center of the video frame.
  • FIG. 2F illustrates another flowchart for determining a face position according to an embodiment of the present invention.
  • the method may include:
  • step 203c the video screen is pushed to the user terminal connected to the camera.
  • Step 203d Receive selection indication information returned by the user terminal, where the selection indication information is used to indicate a face included in the video screen.
  • step 203e the position of the face indicated by the selection indication information in the video screen is determined as the face position.
  • the camera may also push the captured video image to the user terminal connected to the camera, and the user may select a face to be photographed, and when the camera receives the user terminal and returns The selection indication message determines the position of the face selected by the selection indication message in the video screen as the face position.
  • the photographing screen shown in FIG. 2G is a photographing screen that the camera transmits to the user terminal and displays in the user terminal, and the photographing screen has A, After three faces of B and C, after the user clicks on the area where one of the faces C is located, the user terminal sends the coordinate information of the user click to the camera. After receiving the coordinate information, the camera determines that the user selects the face C according to the coordinate information, and thereafter The camera adjusts the shooting angle so that the face image of the face C is at the center of the captured video frame.
  • the camera shooting angle adjustment method shown in the embodiment of the present invention after determining the position of the human body in the photographable range, and adjusting the camera shooting angle to the human body position, fine-tuning the camera according to the position of the face, automatically according to The face position is finely adjusted by the camera shooting angle, and does not require manual adjustment by the user, thereby reducing the difficulty of the user operation and improving the accuracy of the shooting angle adjustment.
  • the position of the human body within the photographable range can be determined according to a preset frequency band or time spectrum, and the accurate and effective tracking can be performed.
  • the human body in the shooting range so that the camera's shooting angle can be adjusted to the position of the human body, and other animals or objects can be determined as the human body position.
  • the face to be photographed by the camera is determined; or, the camera The head pushes the captured video image to the user terminal connected to the camera, and receives the display instruction message of the user terminal to determine the face to be photographed by the user, thereby finely adjusting the shooting angle, so that when there are multiple faces in the detectable range, Adjust the shooting angle according to the user's needs, and display the face that the user needs to shoot in the center of the video screen.
  • FIG. 3A is a flowchart of a camera shooting angle adjustment method according to still another exemplary embodiment.
  • the camera shooting angle adjustment method is used in a smart camera. As shown in FIG. 3A, the camera shooting angle adjustment method may include the following steps.
  • step 301 the position of the human body within the camera's viewable range is determined.
  • step 302 the shooting angle of the camera is adjusted.
  • step 303 the face position in the video picture captured by the camera is determined by face recognition.
  • step 304 the camera's shooting angle is fine-tuned according to the face position.
  • step 305 after fine-tuning the shooting angle of the camera according to the face position, the distance of the face image corresponding to the face position from the center of the video image captured by the camera is monitored.
  • the camera faces the face image corresponding to the face position from the video.
  • the distance in the center of the screen is detected.
  • the center point of the video screen 30 taken by the camera is used as a starting point, and the distance from the center point of the video screen 30 to the center of the face image 31 is calculated, and the distance from the center of the video screen 30 to the center of the face image 31 is calculated.
  • the distance of the face image corresponding to the face position deviates from the center of the video picture captured by the camera.
  • step 306 when it is detected that the distance of the face image from the center of the video image is greater than the preset distance, the camera is further fine-tuned according to the position of the face image, so that the face image is in the The center of the video frame captured by the camera.
  • the camera can specify the preset distance as the radius by the center of the video image taken by the camera, and when the distance from the video center to the center of the face image is greater than the preset specified distance, the center of the face image is located in the video screen.
  • the center of the circle is the center of the circle, and when the preset distance is outside the circle of the radius, the camera's shooting angle is finely adjusted according to the position of the face image.
  • FIG. 3C a schematic diagram of the face position adjustment shown in FIG. 3C.
  • the distance from the center of the face image 33 to the center of the video screen 32 is greater than the preset distance in FIG. 3C, that is, the center of the face image 33 is at the center of the center of the video screen 32, and the circle 34 is specified as the radius of the preset distance.
  • the shooting angle of the camera is finely adjusted, and the face image 33 is readjusted to the center position of the video screen 32.
  • the camera when the camera detects that the distance from the center of the face image to the center of the video screen is greater than the preset distance, the camera does not immediately adjust the shooting angle, but presets for a period of time (for example, the preset time may be 5 seconds or 10 seconds). ), when the distance from the center of the face image to the center of the video screen is greater than the preset distance for more than the preset time, the camera will re-root Fine-tune the shooting angle according to the position of the face image.
  • the preset time for example, the preset time may be 5 seconds or 10 seconds.
  • the distance and the preset distance of the face image from the center of the video screen are not limited, and the developer or the user may set the method according to the actual use.
  • the camera shooting angle adjustment method shown in the embodiment of the present invention after determining the position of the human body in the photographable range, and adjusting the camera shooting angle to the human body position, fine-tuning the camera according to the position of the face, automatically according to The face position is finely adjusted by the camera shooting angle, and does not require manual adjustment by the user, thereby reducing the difficulty of the user operation and improving the accuracy of the shooting angle adjustment.
  • the position of the human body within the photographable range can be determined according to a preset frequency band or time spectrum, and the accurate and effective tracking can be performed.
  • the human body in the shooting range so that the camera's shooting angle can be adjusted to the position of the human body, and other animals or objects can be determined as the human body position.
  • the face to be photographed by the camera is determined by recognizing the expression of the face within the photographable range; or the camera pushes the captured video image to the user terminal connected to the camera, and determines the user by receiving the display instruction message of the user terminal.
  • the face to be photographed thereby fine-tuning the shooting angle, so that when there are multiple faces in the detectable range, the shooting angle is adjusted according to the user's needs, and the face that the user needs to shoot is displayed in the middle of the video screen.
  • the shooting angle is re-adjusted so that the face position is at the center of the video image, thereby For the best results.
  • FIG. 4 is a block diagram of a camera shooting angle adjusting device that can be used in a smart camera according to an exemplary embodiment.
  • the camera shooting angle adjusting device includes, but is not limited to, a first positioning module 401 , a first adjusting module 402 , a second positioning module 403 , and a second adjusting module 404 .
  • the first positioning module 401 is configured to determine a human body position within a camera detectable range
  • the first adjustment module 402 is configured to adjust the shooting angle of the camera such that the image at the human body position determined by the first positioning module 401 is at the center of the video image captured by the camera;
  • the second positioning module 403 is configured to determine a face position in a video frame captured by the camera
  • the second adjustment module 404 is configured to finely adjust the shooting angle of the camera according to the face position determined by the second positioning module 403, so that the image of the face position is at the center of the video image captured by the camera.
  • the camera shooting angle adjusting device shown in the embodiment of the present invention automatically adjusts the camera according to the position of the human face by determining the position of the human body within the photographable range and adjusting the camera shooting angle to the human body position.
  • the face position is finely adjusted by the camera shooting angle, and does not require manual adjustment by the user, thereby reducing the difficulty of the user operation and improving the accuracy of the shooting angle adjustment.
  • FIG. 5 is a block diagram of a camera shooting angle adjusting device, which may be used in a smart camera, according to another exemplary embodiment.
  • the camera shooting angle adjusting device includes, but is not limited to, a first positioning module 501 , a first adjusting module 502 , a second positioning module 503 , and a second adjusting module 504 .
  • the first positioning module 501 is configured to determine a human body position within a camera detectable range
  • the first adjustment module 502 is configured to adjust the shooting angle of the camera such that the image at the human body position determined by the first positioning module 501 is at the center of the video image captured by the camera;
  • the second positioning module 503 is configured to determine a face position in a video frame captured by the camera
  • the second adjustment module 504 is configured to finely adjust the shooting angle of the camera according to the face position determined by the second positioning module 503, so that the image of the face position is at the center of the video image captured by the camera.
  • the first positioning module 501 includes: a sound collection submodule 501a and a frequency band detection submodule 501b.
  • the sound collection sub-module 501a is configured to collect a sound signal emitted by a sound source within a camera-capable range
  • the frequency band detecting sub-module 501b is configured to detect whether a frequency band of the sound signal collected by the sound collecting sub-module 501a is within a preset frequency range;
  • the detection result is that the frequency band of the sound signal collected by the sound collection sub-module 501a is within the preset frequency range, it is determined that the position of the sound source is the human body position.
  • the first positioning module 501 further includes: an infrared collection submodule 501c and a spectrum monitoring submodule 501d.
  • the infrared collection sub-module 501c is configured to collect an infrared signal emitted by an infrared signal source within a camera range;
  • the spectrum monitoring sub-module 501d is configured to detect whether the spectral feature of the infrared signal collected by the infrared collection sub-module 501c matches the preset spectrum;
  • the detection result is that the spectrum characteristic of the infrared signal collected by the infrared collection sub-module 501c matches the preset spectrum, it is determined that the position of the infrared signal source is the human body position.
  • the second positioning module 503 includes: an expression recognition submodule 503a and a first positioning submodule 503b.
  • the expression recognition sub-module 503a is configured to perform expression recognition on at least one of the faces of the video captured by the camera;
  • the first positioning sub-module 503b is configured to determine the position of the face recognized by the expression recognition sub-module 503a as the face of the specified expression in the video picture as the face position.
  • the second positioning module 503 further includes: a screen pushing submodule 503c, an information receiving submodule 503d, and a second positioning submodule 503e.
  • the screen pushing sub-module 503c is configured to push a video screen to a user terminal connected to the camera;
  • the information receiving sub-module 503d is configured to receive the selection indication information returned by the user terminal, and the selection indication information is used to indicate a human face included in the video screen;
  • the second positioning sub-module 503e is configured to determine a position of the face indicated by the selection indication information received by the information receiving sub-module 503d in the video screen as a face position.
  • the device further includes: a distance monitoring module 505.
  • the distance monitoring module 505 is configured to monitor a distance of a face image corresponding to the face position from a center of the video image captured by the camera after fine-tuning the shooting angle of the camera according to the face position;
  • the second adjustment module 504 is further configured to finely adjust the shooting angle of the camera according to the position of the face image when the distance monitoring module 505 detects that the distance of the face image from the center of the video screen is greater than the preset distance.
  • the face image is at the center of the video frame captured by the camera.
  • the camera shooting angle adjusting device shown in the embodiment of the present invention automatically adjusts the camera according to the position of the human face by determining the position of the human body within the photographable range and adjusting the camera shooting angle to the human body position.
  • the face position is finely adjusted by the camera shooting angle, and does not require manual adjustment by the user, thereby reducing the difficulty of the user operation and improving the accuracy of the shooting angle adjustment.
  • the position of the human body within the photographable range can be determined according to a preset frequency band or time spectrum, and the accurate and effective tracking can be performed.
  • the human body in the shooting range so that the camera's shooting angle can be adjusted to the position of the human body, and other animals or objects can be determined as the human body position.
  • the face to be photographed by the camera is determined by recognizing the expression of the face within the photographable range; or the camera pushes the captured video image to the user terminal connected to the camera, and determines the user by receiving the display instruction message of the user terminal.
  • the face to be photographed thereby fine-tuning the shooting angle, so that when there are multiple faces in the detectable range, the shooting angle is adjusted according to the user's needs, and the face that the user needs to shoot is displayed in the middle of the video screen.
  • the shooting angle is re-adjusted so that the face position is at the center of the video image, thereby For the best results.
  • An exemplary embodiment of the present invention further provides a camera shooting angle adjusting device, which can implement the camera shooting angle adjusting method provided by the present invention.
  • the apparatus includes a processor and a memory for storing executable instructions of the processor.
  • processor is configured to:
  • Fine-adjusting the shooting angle of the camera according to the face position so that the image of the face position is at the center of the video image captured by the camera.
  • the determining the position of the human body within the cameraable range of the camera includes:
  • the position is the body position.
  • the determining the position of the human body within the cameraable range of the camera includes:
  • the detection result is that the spectral characteristic of the infrared signal matches the preset spectrum, determining that the location of the infrared signal source is the human body position.
  • the determining a face position in the video image captured by the camera includes:
  • the position of the recognized face as the face of the specified expression in the video picture is determined as the face position.
  • the determining a face position in the video image captured by the camera includes:
  • the position of the face indicated by the selection indication information in the video screen is determined as the face position.
  • the processor is further configured to:
  • the camera When it is detected that the distance of the face image from the center of the video image is greater than a preset distance, the camera is further fine-tuned according to the position of the face image, so that the face image is in the The center of the video picture captured by the camera.
  • FIG. 6 is a block diagram of an apparatus 600, according to an exemplary embodiment.
  • device 600 can be a smart camera.
  • device 600 can include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, sensor component 614, and communication component 616.
  • Processing component 602 typically controls the overall operation of device 600, such as operations associated with display, data communication, camera operations, and recording operations, and the like.
  • Processing component 602 can include one or more processors 618 to execute instructions to perform all or part of the steps described above.
  • processing component 602 can include one or more modules to facilitate interaction between component 602 and other components.
  • processing component 602 can include a multimedia module to facilitate interaction between multimedia component 608 and processing component 602.
  • Memory 604 is configured to store various types of data to support operation at device 600. Examples of such data include instructions for any application or method operating on device 600.
  • the memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk. Also stored in the memory 604 There are one or more modules that are configured to be executed by the one or more processors 618 to perform all or part of the steps of any of the methods illustrated in Figures 1, 2A, or 3A above.
  • Power component 606 provides power to various components of device 600.
  • Power component 606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 600.
  • the multimedia component 608 includes a screen between the device 600 and the user that provides an output interface.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor can sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the audio component 610 is configured to output and/or input an audio signal.
  • audio component 610 includes a microphone (MIC) that is configured to receive an external audio signal when device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 604 or transmitted via communication component 616.
  • audio component 610 also includes a speaker for outputting an audio signal.
  • Sensor assembly 614 includes one or more sensors for providing device 600 with a status assessment of various aspects.
  • sensor assembly 614 can detect an open/closed state of device 600, relative positioning of components, and sensor assembly 614 can also detect changes in position of one component of device 600 or device 600 and temperature changes of device 600.
  • the sensor assembly 614 can also include a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 616 is configured to facilitate wired or wireless communication between device 600 and other devices.
  • the device 600 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 616 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 616 also includes a near field communication (NFC) module to facilitate short range communication.
  • NFC near field communication
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • RFID radio frequency identification
  • IrDA infrared data association
  • UWB ultra-wideband
  • Bluetooth Bluetooth
  • device 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable A gate array (FPGA), controller, microcontroller, microprocessor, or other electronic component implementation for performing the above methods.
  • ASICs application specific integrated circuits
  • DSPs digital signal processors
  • DSPDs digital signal processing devices
  • PLDs programmable logic devices
  • FPGA field programmable A gate array
  • controller microcontroller, microprocessor, or other electronic component implementation for performing the above methods.
  • non-transitory computer readable storage medium comprising instructions, such as a memory 604 comprising instructions executable by processor 618 of apparatus 600 to perform the above method.
  • the non-transitory computer readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

一种摄像头拍摄角度调整方法及装置,属于智能家居领域。所述方法包括:确定摄像头可拍摄范围内的人体位置;对所述摄像头的拍摄角度进行调整,使所述人体位置处的图像处于所述摄像头拍摄到的视频画面的中心;确定所述摄像头拍摄到的视频画面中的人脸位置;根据所述人脸位置对所述摄像头的拍摄角度进行微调,使所述人脸位置的图像处于所述摄像头拍摄到的视频画面的中心。通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。

Description

摄像头拍摄角度调整方法及装置
本申请基于申请号为201510849310.5、申请日为2015/11/27的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明涉及智能家居领域,特别涉及一种摄像头拍摄角度调整方法及装置。
背景技术
随着智能设备的普及以及网络技术的飞速发展,智能摄像头在人们的日常生活中也越来越普及,广泛应用于安防、自动控制以及远程视频互动等场景。
在相关技术中,智能摄像头在使用过程中,其拍摄角度固定,当用户想要调整智能摄像头的拍摄角度时,可以通过智能摄像头的控制设备对其拍摄角度进行手动调整。
发明内容
本发明提供了一种摄像头拍摄角度调整方法及装置。所述技术方案如下:
根据本发明的第一方面,提供一种摄像头拍摄角度调整方法,所述方法包括:
确定摄像头可拍摄范围内的人体位置;
对所述摄像头的拍摄角度进行调整,使所述人体位置处的图像处于所述摄像头拍摄到的视频画面的中心;
确定所述摄像头拍摄到的视频画面中的人脸位置;
根据所述人脸位置对所述摄像头的拍摄角度进行微调,使所述人脸位置的图像处于所述摄像头拍摄到的视频画面的中心。
可选地,所述确定摄像头可拍摄范围内的人体位置,包括:
采集所述摄像头可拍摄范围内的声源发出的声音信号;
检测所述声音信号的频段是否处于预设频段范围之内;
若检测结果为所述声音信号的频段处于所述预设频段范围之内,则确定所述声源所在的位置为所述人体位置。
可选地,所述确定摄像头可拍摄范围内的人体位置,包括:
采集所述摄像头可拍摄范围内的红外信号源发出的红外信号;
检测所述红外信号的频谱特征是否与预设频谱相匹配;
若检测结果为所述红外信号的频谱特征与所述预设频谱相匹配,则确定所述红外信号源所在的位置为所述人体位置。
可选地,所述确定所述摄像头拍摄到的视频画面中的人脸位置,包括:
对所述摄像头拍摄到的视频画面中的至少一个人脸进行表情识别;
将识别出的表情为指定表情的人脸在所述视频画面中的位置确定为所述人脸位置。
可选地,所述确定所述摄像头拍摄到的视频画面中的人脸位置,包括:
向与所述摄像头相连接的用户终端推送所述视频画面;
接收所述用户终端返回的选择指示信息,所述选择指示信息用于指示所述视频画面中包含的人脸;
将所述选择指示信息所指示的人脸在所述视频画面中的位置确定为所述人脸位置。
可选地,所述方法还包括:
在根据所述人脸位置对所述摄像头的拍摄角度进行微调之后,监测所述人脸位置对应的人脸图像偏离所述摄像头拍摄的视频画面的中心的距离;
当检测到所述人脸图像偏离所述视频画面的中心的距离大于预设距离时,重新根据所述人脸图像的位置对所述摄像头的拍摄角度进行微调,使所述人脸图像处于所述摄像头拍摄到的视频画面的中心。
根据本发明的第二方面,提供一种摄像头拍摄角度调整装置,所述装置包括:
第一定位模块,被配置为确定摄像头可拍摄范围内的人体位置;
第一调整模块,被配置为对所述摄像头的拍摄角度进行调整,使所述第一定位模块确定的所述人体位置处的图像处于所述摄像头拍摄到的视频画面的中心;
第二定位模块,被配置为确定所述摄像头拍摄到的视频画面中的人脸位置;
第二调整模块,被配置为根据所述第二定位模块确定的所述人脸位置对所述摄像头的拍摄角度进行微调,使所述人脸位置的图像处于所述摄像头拍摄到的视频画面的中心。
可选地,所述第一定位模块,包括:
声音采集子模块,被配置为采集所述摄像头可拍摄范围内的声源发出的声音信号;
频段检测子模块,被配置为检测所述声音采集子模块采集的所述声音信号的频段是否处于预设频段范围之内;
若检测结果为所述声音采集子模块采集的所述声音信号的频段处于所述预设频段范围之内,则确定所述声源所在的位置为所述人体位置。
可选地,所述第一定位模块,还包括:
红外采集子模块,被配置为采集所述摄像头可拍摄范围内的红外信号源发出的红外信号;
频谱监测子模块,被配置为检测所述红外线采集子模块采集的所述红外信号的频谱特征是否与预设频谱相匹配;
若检测结果为所述红外采集子模块采集的所述红外信号的频谱特征与所述预设频谱相匹配,则确定所述红外信号源所在的位置为所述人体位置。
可选地,所述第二定位模块,包括:
表情识别子模块,被配置为对所述摄像头拍摄到的视频画面中的至少一个人脸进行表情识别;
第一定位子模块,用于将所述表情识别子模块识别出的表情为指定表情的人脸在所述视频画面中的位置确定为所述人脸位置。
可选地,所述第二定位模块,还包括:
画面推送子模块,被配置为向与所述摄像头相连接的用户终端推送所述视频画面;
信息接收子模块,被配置为接收所述用户终端返回的选择指示信息,所述选择指示信息用于指示所述视频画面中包含的人脸;
第二定位子模块,用于将所述信息接收子模块接收的所述选择指示信息所指示的人脸在所述视频画面中的位置确定为所述人脸位置。
可选地,所述装置,还包括:
距离监测模块,被配置为在根据所述人脸位置对所述摄像头的拍摄角度进行微调之后,监测所述人脸位置对应的人脸图像偏离所述摄像头拍摄的视频画面的中心的距离;
所述第二调整模块,还被配置为当所述距离监测模块检测到所述人脸图像偏离所述视频画面的中心的距离大于预设距离时,重新根据所述人脸图像的位置对所述摄像头的拍摄角度进行微调,使所述人脸图像处于所述摄像头拍摄到的视频画面的中心。
根据本发明实施例的第三方面,提供一种摄像头拍摄角度调整装置,所述装置包括:
处理器;
用于存储所述处理器的可执行指令的存储器;
其中,所述处理器被配置为:
确定摄像头可拍摄范围内的人体位置;
对所述摄像头的拍摄角度进行调整,使所述人体位置处的图像处于所述摄像头拍摄到的视频画面的中心;
确定所述摄像头拍摄到的视频画面中的人脸位置;
根据所述人脸位置对所述摄像头的拍摄角度进行微调,使所述人脸位置的图像处于所述摄像头拍摄到的视频画面的中心。
本发明的实施例提供的技术方案可以包括以下有益效果:
通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,再根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
应当理解的是,以上的一般描述和后文的细节描述仅是示例性的,并不能限制本发明。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本发明的实施例, 并于说明书一起用于解释本发明的原理。
图1是根据一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图;
图2A是根据另一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图;
图2B是根据另一示例性实施例示出的一种确定摄像头可拍摄范围内的人体位置的流程图;
图2C是根据另一示例性实施例示出的另一种确定摄像头可拍摄范围内的人体位置的流程图;
图2D是根据一示例性实施例示出的一种确定人脸位置的流程图;
图2E是根据另一示例性实施例示出的一种摄像头拍摄角度调整示意图;
图2F是根据另一示例性实施例示出的另一种确定人脸位置的流程图;
图2G是根据另一示例性实施例示出的另一种摄像头拍摄角度调整示意图;
图3A是根据另一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图;
图3B是根据另一示例性实施例示出的一种检测人脸偏移距离的示意图;
图3C是根据另一示例性实施例示出的一种人脸位置调整示意图;
图4是根据一示例性实施例示出的一种摄像头拍摄角度调整装置的框图;
图5是根据另一示例性实施例示出的一种摄像头拍摄角度调整装置的框图;
图6是根据一示例性实施例示出的一种装置的框图。
具体实施方式
这里将详细地对示例性实施例执行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本发明相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本发明的一些方面相一致的装置和方法的例子。
图1是根据一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图。该摄像头拍摄角度调整方法用于智能摄像头中。如图1所示,该摄像头拍摄角度调整方法可以包括以下步骤。
在步骤101中,确定摄像头可拍摄范围内的人体位置。
在步骤102中,对该摄像头的拍摄角度进行调整,使该人体位置处的图像处于该摄像头拍摄到的视频画面的中心。
在步骤103中,确定该摄像头拍摄到的视频画面中的人脸位置。
在步骤104中,根据该人脸位置对该摄像头的拍摄角度进行微调,使该人脸位置的图像处于该摄像头拍摄到的视频画面的中心。
可选地,该确定摄像头可拍摄范围内的人体位置,包括:
采集该摄像头可拍摄范围内的声源发出的声音信号;
检测该声音信号的频段是否处于预设频段范围之内;
若检测结果为该声音信号的频段处于该预设频段范围之内,则确定该声源所在的位置为该人体位置。
可选地,该确定摄像头可拍摄范围内的人体位置,包括:
采集该摄像头可拍摄范围内的红外信号源发出的红外信号;
检测该红外信号的频谱特征是否与预设频谱相匹配;
若检测结果为该红外信号的频谱特征与该预设频谱相匹配,则确定该红外信号源所在的位置为该人体位置。
可选地,该确定该摄像头拍摄到的视频画面中的人脸位置,包括:
对该摄像头拍摄到的视频画面中的至少一个人脸进行表情识别;
将识别出的表情为指定表情的人脸在该视频画面中的位置确定为该人脸位置。
可选地,该确定该摄像头拍摄到的视频画面中的人脸位置,包括:
向与该摄像头相连接的用户终端推送该视频画面;
接收该用户终端返回的选择指示信息,该选择指示信息用于指示该视频画面中包含的人脸;
将该选择指示信息所指示的人脸在该视频画面中的位置确定为该人脸位置。
可选地,该方法还包括:
在根据该人脸位置对该摄像头的拍摄角度进行微调之后,监测该人脸位置对应的人脸图像偏离该摄像头拍摄的视频画面的中心的距离;
当检测到该人脸图像偏离该视频画面的中心的距离大于预设距离时,重新根据该人脸图像的位置对该摄像头的拍摄角度进行微调,使该人脸图像处于该摄像头拍摄到的视频画面的中心。
综上所述,本发明实施例所示的摄像头拍摄角度调整方法,通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
图2A是根据另一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图。该摄像头拍摄角度调整方法用于智能摄像头中。如图2A所示,该摄像头拍摄角度调整方法可以包括以下步骤。
在步骤201中,确定摄像头可拍摄范围内的人体位置。
在一个可能的实现方式中,请参考图2B,其示出了本发明实施例提供的一种确定摄像头可拍摄范围内的人体位置的流程图,该方法可以包括:
步骤201a,采集摄像头可拍摄范围内的声源发出的声音信号。
步骤201b,检测声音信号的频段是否处于预设频段范围之内。
步骤201c,若检测结果为声音信号的频段处于预设频段范围之内,则确定声源所在的位置为人体位置。
其中,智能摄像头可以根据可拍摄范围内声源发出的声音信号来确定人体的位置。智能摄像头内部设有声音信号采集装置,在预设的一段时间内,智能摄像头内的声音信号采集装置采集声源发出的声音信号,并在该预设时间内采集的声音信号取平均值,该预设时间的具体时间不作限制,可以设置为30s,或者,也可设置为1min。若该平均值在预设的频段范围内,则确定该声源所在的位置为人体位置。具体地,由于确定的是人体的位置,则该预设的频段范围就应该是正常人交流的频段范围,例如:正常人声交谈的频段范围为130-350Hz。
在另一个可能的实现方式中,请参考图2C,其示出了本发明实施例提供的另一种确定摄像头可拍摄范围内的人体位置的流程图,该方法可以包括:
步骤201d,采集摄像头可拍摄范围内的红外信号源发出的红外信号。
步骤201e,检测红外信号的频谱特征是否与预设频谱相匹配。
步骤201f,若检测结果为红外信号的频谱特征与预设频谱相匹配,则确定红外信号源所在的位置为人体位置。
智能摄像头除了根据可拍摄的范围内声源发出的声音信号确定人体位置外还可以通过红外信号的频谱特征来来确定人体的位置。其中,红外信号的频谱特征可以包括红外信号的频率和红外信号的波长等。不同类型的物体,其发出的红外信号的频谱特征也不同,在本实施例一种可能的实施方式中,可以将人体的频谱特征设置为预设频谱特征,后续检测到与该预设频谱特征相匹配的物体时,就可以确定该物体是人体。
在步骤202中,对该摄像头的拍摄角度进行调整。
在对摄像头的拍摄角度进行调整时,可以使该人体位置处的图像处于该摄像头拍摄到的视频画面的中心。
具体地,在智能摄像头内部设有一个微型电动机,当摄像头确定人体位置之后,摄像头内的微型电动机开始工作,调整摄像头的拍摄角度,使得该人体位置处的图像位于拍摄到的视频画面的中心。
在步骤203中,通过人脸识别来确定该摄像头拍摄到的视频画面中的人脸位置。
在确定人体位置之后,为了取得更好的拍摄效果,摄像头通过人脸识别来确定视频画面中人脸的位置。具体地,智能摄像头基于视频画面中人体面部特征,如:耳朵、眼睛、嘴巴和鼻子等,根据这些面部特征的形状以及它们之间的几何关系来确定人脸的位置。
在步骤204中,根据该人脸位置对该摄像头的拍摄角度进行微调。
其中,当摄像头拍摄到的视频画面中只存在一个人脸时,在对拍摄角度进行微调时,摄像头直接使人脸位置的图像处于摄像头拍摄到的视频画面的中心即可。
可选地,当该摄像头拍摄到的视频画面中存在多个人脸时,可以确定其中的一个人脸位置,并根据确定的人脸位置进行后续的微调,其中,在视频画面中存在多个人脸时确定 一个人脸位置的方法可以如下:
在一个可能的实现方式中,请参考图2D,其示出了本发明实施例提供的一种确定人脸位置的流程图,该方法可以包括:
步骤203a,对摄像头拍摄到的视频画面中的至少一个人脸进行表情识别。
步骤203b,将识别出的表情为指定表情的人脸在视频画面中的位置确定为人脸位置。
当视频画面中出现多个人脸时,可以通过表情识别的方法来确定摄像头所要拍摄的人脸。如:预先设定微笑表情为指定表情。当摄像头拍摄到的视频画面中的多个人脸中,有一个人脸是微笑表情,则摄像头对拍摄角度进行微调,使该带有微笑表情的人脸位于视频画面的中心。请参考图2E所示的第一种摄像头拍摄角度调整的示意图,其中,图2E所示的拍摄画面20中存在3个人脸,分别为人脸20a、20b和20c,摄像头检测到其中一个人脸20c为微笑表情,则对拍摄角度进行微调,使该带有微笑表情的人脸20c位于视频画面的中心。
在一个可能的实现方式中,请参考图2F,其示出了本发明实施例提供的另一种确定人脸位置的流程图,该方法可以包括:
步骤203c,向与摄像头相连接的用户终端推送视频画面。
步骤203d,接收用户终端返回的选择指示信息,选择指示信息用于指示视频画面中包含的人脸。
步骤203e,将选择指示信息所指示的人脸在视频画面中的位置确定为人脸位置。
可选地,当视频画面中出现多个人脸时,摄像头还可以将拍摄到的视频画面推送给与该摄像头相连接的用户终端,用户可以选择所要拍摄的人脸,当摄像头接收到用户终端返回的选择指示消息,就将该选择指示消息选择的人脸在视频画面中的位置确定为人脸位置。请参考图2G所示的另一种摄像头拍摄角度调整的示意图,其中,图2G所示的拍摄画面是摄像头传输给用户终端,并在用户终端中显示的拍摄画面,该拍摄画面中存在A、B和C三个人脸,用户点击其中一个人脸C所在区域后,用户终端向摄像头发送用户点击的坐标信息,摄像头接收到该坐标信息后,根据该坐标信息确定用户选择了人脸C,此后摄像头调整拍摄角度,使人脸C的人脸图像处于拍摄到的视频画面的中心。
综上所述,本发明实施例所示的摄像头拍摄角度调整方法,通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
另外,利用音源或者红外检测的方法,通过采集可拍摄范围内物体发出的声音信号或者是红外信号,根据预设的频段或者时频谱确定可拍摄范围内人体的位置,可以准确有效地追踪到可拍摄范围内的人体,从而可以调整摄像头的拍摄角度至人体所在位置,避免将其他的动物或者物体确定为人体位置,
另外,通过识别可拍摄范围内人脸的表情,确定摄像头所要拍摄的人脸;或者,摄像 头将拍摄到的视频画面推送给与摄像头连接的用户终端,通过接收用户终端的显示指示消息,确定用户所要拍摄的人脸,从而微调拍摄角度,便于在可拍摄范围内有多个人脸时,根据用户的需求调整拍摄角度,将用户需要拍摄的人脸显示在视频画面中心。
图3A是根据又一示例性实施例示出的一种摄像头拍摄角度调整方法的流程图。该摄像头拍摄角度调整方法用于智能摄像头中。如图3A所示,该摄像头拍摄角度调整方法可以包括以下步骤。
在步骤301中,确定摄像头可拍摄范围内的人体位置。
在步骤302中,对该摄像头的拍摄角度进行调整。
在步骤303中,通过人脸识别来确定该摄像头拍摄到的视频画面中的人脸位置。
在步骤304中,根据该人脸位置对该摄像头的拍摄角度进行微调。
其中,上述步骤301至步骤304的实现过程可以参考上述图2所示实施例中,步骤201至步骤204下的描述,此处不再赘述。
在步骤305中,在根据该人脸位置对该摄像头的拍摄角度进行微调之后,监测该人脸位置对应的人脸图像偏离该摄像头拍摄的视频画面的中心的距离。
由于视频过程中,人脸位置是一直在变化的,为了防止人脸位置超出摄像头的可拍摄范围,从而获得更好的保证摄像头拍摄的视频效果,摄像头对人脸位置对应的人脸图像偏离视频画面中心的距离进行检测。具体地,请参考图3B所示的一种检测人脸偏移距离的示意图。图3B中以摄像头拍摄的视频画面30的中心点作为起始点,计算出该视频画面30的中心点到人脸图像31中心的距离,将视频画面30的中心到人脸图像31的中心的距离作为人脸位置对应的人脸图像偏离该摄像头拍摄的视频画面的中心的距离。
在步骤306中,当检测到该人脸图像偏离该视频画面的中心的距离大于预设距离时,重新根据该人脸图像的位置对该摄像头的拍摄角度进行微调,使该人脸图像处于该摄像头拍摄到的视频画面的中心。
摄像头可以通过以摄像头拍摄的视频画面的中心为圆心,指定预设距离为半径,当检测到视频中心到人脸图像中心的距离大于预设指定距离时,即人脸图像的中心位于以视频画面的中心为圆心,指定预设距离为半径的圆外部时,重新根据人脸图像的位置对摄像头的拍摄角度进行微调。
具体地,请参考图3C所示的一种人脸位置调整示意图。图3C中检测到人脸图像33的中心到视频画面32的中心的距离大于预设距离,即人脸图像33的中心处于以视频画面32的中心为圆心,指定预设距离为半径的圆34的外部时,对摄像头的拍摄角度进行微调,将人脸图像33重新调整至视频画面32的中心位置。
优选的,当摄像头检测到人脸图像中心到视频画面中心的距离大于预设距离时,并不立刻调整拍摄角度,而是预设一段时间(比如,该预设时间可以为5秒或10秒),当人脸图像中心到视频画面中心的距离大于预设距离的时间超过预设时间时,摄像头才会重新根 据人脸图像的位置对拍摄角度进行微调。
需要注意的是,本实施例不对检测人脸图像偏离视频画面中心的距离和预设距离做出限定,开发人员或者用户可以根据实际使用的情况自行设定。
综上所述,本发明实施例所示的摄像头拍摄角度调整方法,通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
另外,利用音源或者红外检测的方法,通过采集可拍摄范围内物体发出的声音信号或者是红外信号,根据预设的频段或者时频谱确定可拍摄范围内人体的位置,可以准确有效地追踪到可拍摄范围内的人体,从而可以调整摄像头的拍摄角度至人体所在位置,避免将其他的动物或者物体确定为人体位置,
另外,通过识别可拍摄范围内人脸的表情,确定摄像头所要拍摄的人脸;或者,摄像头将拍摄到的视频画面推送给与摄像头连接的用户终端,通过接收用户终端的显示指示消息,确定用户所要拍摄的人脸,从而微调拍摄角度,便于在可拍摄范围内有多个人脸时,根据用户的需求调整拍摄角度,将用户需要拍摄的人脸显示在视频画面中间。
此外,通过确定视频画面中心与人脸图像之间的距离,如果视频画面中心与人脸图像之间的距离大于预设距离,则重新调整拍摄角度,使得人脸位置位于视频画面的中心,从而达到最佳拍摄效果。
下述为本发明装置实施例,可以用于执行本发明方法实施例。对于本发明装置实施例中未披露的细节,请参照本发明方法实施例。
图4是根据一示例性实施例示出的一种摄像头拍摄角度调整装置的框图,该摄像头拍摄角度调整装置可以用于智能摄像头中。如图4所示,该摄像头拍摄角度调整装置包括但不限于:第一定位模块401、第一调整模块402、第二定位模块403和第二调整模块404。
该第一定位模块401,被配置为确定摄像头可拍摄范围内的人体位置;
该第一调整模块402,被配置为对该摄像头的拍摄角度进行调整,使该第一定位模块401确定的人体位置处的图像处于摄像头拍摄到的视频画面的中心;
该第二定位模块403,被配置为确定摄像头拍摄到的视频画面中的人脸位置;
该第二调整模块404,被配置为根据该第二定位模块403确定的人脸位置对摄像头的拍摄角度进行微调,使人脸位置的图像处于摄像头拍摄到的视频画面的中心。
综上所述,本发明实施例所示的摄像头拍摄角度调整装置,通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
图5是根据另一示例性实施例示出的一种摄像头拍摄角度调整装置的框图,该摄像头拍摄角度调整装置可以用于智能摄像头中。如图5所示,该摄像头拍摄角度调整装置包括但不限于:第一定位模块501、第一调整模块502、第二定位模块503和第二调整模块504。
该第一定位模块501,被配置为确定摄像头可拍摄范围内的人体位置;
该第一调整模块502,被配置为对该摄像头的拍摄角度进行调整,使该第一定位模块501确定的人体位置处的图像处于摄像头拍摄到的视频画面的中心;
该第二定位模块503,被配置为确定摄像头拍摄到的视频画面中的人脸位置;
该第二调整模块504,被配置为根据该第二定位模块503确定的人脸位置对摄像头的拍摄角度进行微调,使人脸位置的图像处于摄像头拍摄到的视频画面的中心。
可选地,第一定位模块501,包括:声音采集子模块501a和频段检测子模块501b。
该声音采集子模块501a,被配置为采集摄像头可拍摄范围内的声源发出的声音信号;
该频段检测子模块501b,被配置为检测声音采集子模块501a采集的声音信号的频段是否处于预设频段范围之内;
若检测结果为声音采集子模块501a采集的声音信号的频段处于预设频段范围之内,则确定声源所在的位置为人体位置。
可选地,第一定位模块501,还包括:红外采集子模块501c和频谱监测子模块501d。
该红外采集子模块501c,被配置为采集摄像头可拍摄范围内的红外信号源发出的红外信号;
该频谱监测子模块501d,被配置为检测红外线采集子模块501c采集的红外信号的频谱特征是否与预设频谱相匹配;
若检测结果为红外采集子模块501c采集的红外信号的频谱特征与预设频谱相匹配,则确定红外信号源所在的位置为人体位置。
可选地,第二定位模块503,包括:表情识别子模块503a和第一定位子模块503b。
该表情识别子模块503a,被配置为对摄像头拍摄到的视频画面中的至少一个人脸进行表情识别;
该第一定位子模块503b,用于将表情识别子模块503a识别出的表情为指定表情的人脸在视频画面中的位置确定为人脸位置。
可选地,第二定位模块503,还包括:画面推送子模块503c、信息接收子模块503d和第二定位子模块503e。
该画面推送子模块503c,被配置为向与摄像头相连接的用户终端推送视频画面;
该信息接收子模块503d,被配置为接收用户终端返回的选择指示信息,选择指示信息用于指示视频画面中包含的人脸;
该第二定位子模块503e,用于将信息接收子模块503d接收的选择指示信息所指示的人脸在视频画面中的位置确定为人脸位置。
可选地,该装置还包括:距离监测模块505。
该距离监测模块505,被配置为在根据人脸位置对摄像头的拍摄角度进行微调之后,监测人脸位置对应的人脸图像偏离摄像头拍摄的视频画面的中心的距离;
该第二调整模块504,还被配置为当距离监测模块505检测到人脸图像偏离视频画面的中心的距离大于预设距离时,重新根据人脸图像的位置对摄像头的拍摄角度进行微调,使人脸图像处于摄像头拍摄到的视频画面的中心。
综上所述,本发明实施例所示的摄像头拍摄角度调整装置,通过确定可拍摄范围内人体的位置,并调整摄像头拍摄角度至人体位置之后,根据人脸的位置对摄像头进行微调,自动根据人脸位置进行摄像头拍摄角度的微调,不需要用户手动进行调节,达到降低用户操作难度,同时提高了拍摄角度调节准确性的效果。
另外,利用音源或者红外检测的方法,通过采集可拍摄范围内物体发出的声音信号或者是红外信号,根据预设的频段或者时频谱确定可拍摄范围内人体的位置,可以准确有效地追踪到可拍摄范围内的人体,从而可以调整摄像头的拍摄角度至人体所在位置,避免将其他的动物或者物体确定为人体位置,
另外,通过识别可拍摄范围内人脸的表情,确定摄像头所要拍摄的人脸;或者,摄像头将拍摄到的视频画面推送给与摄像头连接的用户终端,通过接收用户终端的显示指示消息,确定用户所要拍摄的人脸,从而微调拍摄角度,便于在可拍摄范围内有多个人脸时,根据用户的需求调整拍摄角度,将用户需要拍摄的人脸显示在视频画面中间。
Furthermore, the distance between the center of the video frame and the face image is determined, and if this distance is greater than the preset distance, the shooting angle is readjusted so that the face is located at the center of the video frame, thereby achieving the best shooting effect.
An exemplary embodiment of the present invention further provides a camera shooting angle adjustment apparatus capable of implementing the camera shooting angle adjustment method provided by the present invention. The apparatus includes a processor and a memory for storing instructions executable by the processor.
The processor is configured to:
determine the position of a human body within the shootable range of a camera;
adjust the shooting angle of the camera so that the image at the human body position is at the center of the video frame captured by the camera;
determine the face position in the video frame captured by the camera;
fine-tune the shooting angle of the camera according to the face position, so that the image at the face position is at the center of the video frame captured by the camera.
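Taken together, the four configured steps amount to a coarse-then-fine control flow. The sketch below summarises it; all camera helpers are hypothetical placeholders for the modules described above, not interfaces defined by the disclosure:

```python
def adjust_shooting_angle(camera):
    # Step 1: locate the human body in the shootable range (sound- or infrared-based)
    body_position = camera.locate_body()        # hypothetical helper
    # Step 2: coarse adjustment so the body is at the center of the video frame
    camera.point_at(body_position)              # hypothetical pan/tilt command
    # Step 3: locate the face in the captured frame (expression or terminal selection)
    face_position = camera.locate_face()        # hypothetical helper
    # Step 4: fine-tune so the face is at the center of the video frame
    if face_position is not None:
        camera.fine_tune_to(face_position)      # hypothetical pan/tilt command
```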
Optionally, determining the position of a human body within the shootable range of the camera includes:
collecting a sound signal emitted by a sound source within the shootable range of the camera;
detecting whether the frequency band of the sound signal is within a preset frequency band range;
if the detection result is that the frequency band of the sound signal is within the preset frequency band range, determining the position of the sound source to be the human body position.
Optionally, determining the position of a human body within the shootable range of the camera includes:
collecting an infrared signal emitted by an infrared signal source within the shootable range of the camera;
detecting whether the spectral characteristics of the infrared signal match a preset spectrum;
if the detection result is that the spectral characteristics of the infrared signal match the preset spectrum, determining the position of the infrared signal source to be the human body position.
Optionally, determining the face position in the video frame captured by the camera includes:
performing expression recognition on at least one face in the video frame captured by the camera;
determining, as the face position, the position in the video frame of the face whose recognized expression is a designated expression.
Optionally, determining the face position in the video frame captured by the camera includes:
pushing the video frame to a user terminal connected to the camera;
receiving selection indication information returned by the user terminal, the selection indication information being used to indicate a face contained in the video frame;
determining, as the face position, the position in the video frame of the face indicated by the selection indication information.
Optionally, the processor is further configured to:
after the shooting angle of the camera has been fine-tuned according to the face position, monitor the distance by which the face image corresponding to the face position deviates from the center of the video frame captured by the camera;
when it is detected that the distance by which the face image deviates from the center of the video frame is greater than a preset distance, fine-tune the shooting angle of the camera again according to the position of the face image, so that the face image is at the center of the video frame captured by the camera.
Fig. 6 is a block diagram of an apparatus 600 according to an exemplary embodiment. For example, the apparatus 600 may be a smart camera.
Referring to Fig. 6, the apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, a sensor component 614 and a communication component 616.
The processing component 602 generally controls the overall operation of the apparatus 600, such as operations associated with display, data communication, camera operation and recording. The processing component 602 may include one or more processors 618 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and the other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support the operation of the apparatus 600. Examples of such data include instructions for any application or method operated on the apparatus 600. The memory 604 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disc. The memory 604 further stores one or more modules configured to be executed by the one or more processors 618 to perform all or part of the steps of any of the methods shown in Fig. 1, Fig. 2A or Fig. 3A.
The power component 606 supplies power to the various components of the apparatus 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the apparatus 600.
The multimedia component 608 includes a screen that provides an output interface between the apparatus 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the apparatus 600 is in an operation mode such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 further includes a speaker for outputting audio signals.
The sensor component 614 includes one or more sensors for providing the apparatus 600 with status assessments of various aspects. For example, the sensor component 614 may detect the open/closed state of the apparatus 600 and the relative positioning of components, and may also detect a change in the position of the apparatus 600 or of one of its components as well as a change in the temperature of the apparatus 600. In some embodiments, the sensor component 614 may further include a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the apparatus 600 and other devices. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements, for performing the methods described above.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, which are executable by the processor 618 of the apparatus 600 to perform the methods described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device or the like.
With respect to the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments relating to the method, and will not be elaborated here.
It should be understood that the present invention is not limited to the precise structures that have been described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (13)

  1. A camera shooting angle adjustment method, characterized in that the method comprises:
    determining the position of a human body within a shootable range of a camera;
    adjusting a shooting angle of the camera so that an image at the human body position is at the center of a video frame captured by the camera;
    determining a face position in the video frame captured by the camera;
    fine-tuning the shooting angle of the camera according to the face position, so that an image at the face position is at the center of the video frame captured by the camera.
  2. The method according to claim 1, characterized in that determining the position of a human body within a shootable range of a camera comprises:
    collecting a sound signal emitted by a sound source within the shootable range of the camera;
    detecting whether the frequency band of the sound signal is within a preset frequency band range;
    if the detection result is that the frequency band of the sound signal is within the preset frequency band range, determining the position of the sound source to be the human body position.
  3. The method according to claim 1, characterized in that determining the position of a human body within a shootable range of a camera comprises:
    collecting an infrared signal emitted by an infrared signal source within the shootable range of the camera;
    detecting whether the spectral characteristics of the infrared signal match a preset spectrum;
    if the detection result is that the spectral characteristics of the infrared signal match the preset spectrum, determining the position of the infrared signal source to be the human body position.
  4. The method according to claim 1, characterized in that determining a face position in the video frame captured by the camera comprises:
    performing expression recognition on at least one face in the video frame captured by the camera;
    determining, as the face position, the position in the video frame of a face whose recognized expression is a designated expression.
  5. The method according to claim 1, characterized in that determining a face position in the video frame captured by the camera comprises:
    pushing the video frame to a user terminal connected to the camera;
    receiving selection indication information returned by the user terminal, the selection indication information being used to indicate a face contained in the video frame;
    determining, as the face position, the position in the video frame of the face indicated by the selection indication information.
  6. The method according to claim 1, characterized in that the method further comprises:
    after fine-tuning the shooting angle of the camera according to the face position, monitoring the distance by which a face image corresponding to the face position deviates from the center of the video frame captured by the camera;
    when it is detected that the distance by which the face image deviates from the center of the video frame is greater than a preset distance, fine-tuning the shooting angle of the camera again according to the position of the face image, so that the face image is at the center of the video frame captured by the camera.
  7. A camera shooting angle adjustment apparatus, characterized in that the apparatus comprises:
    a first positioning module, configured to determine the position of a human body within a shootable range of a camera;
    a first adjustment module, configured to adjust a shooting angle of the camera so that an image at the human body position determined by the first positioning module is at the center of a video frame captured by the camera;
    a second positioning module, configured to determine a face position in the video frame captured by the camera;
    a second adjustment module, configured to fine-tune the shooting angle of the camera according to the face position determined by the second positioning module, so that an image at the face position is at the center of the video frame captured by the camera.
  8. The apparatus according to claim 7, characterized in that the first positioning module comprises:
    a sound collection sub-module, configured to collect a sound signal emitted by a sound source within the shootable range of the camera;
    a frequency band detection sub-module, configured to detect whether the frequency band of the sound signal collected by the sound collection sub-module is within a preset frequency band range;
    wherein if the detection result is that the frequency band of the sound signal collected by the sound collection sub-module is within the preset frequency band range, the position of the sound source is determined to be the human body position.
  9. The apparatus according to claim 7, characterized in that the first positioning module further comprises:
    an infrared collection sub-module, configured to collect an infrared signal emitted by an infrared signal source within the shootable range of the camera;
    a spectrum monitoring sub-module, configured to detect whether the spectral characteristics of the infrared signal collected by the infrared collection sub-module match a preset spectrum;
    wherein if the detection result is that the spectral characteristics of the infrared signal collected by the infrared collection sub-module match the preset spectrum, the position of the infrared signal source is determined to be the human body position.
  10. The apparatus according to claim 7, characterized in that the second positioning module comprises:
    an expression recognition sub-module, configured to perform expression recognition on at least one face in the video frame captured by the camera;
    a first positioning sub-module, configured to determine, as the face position, the position in the video frame of a face whose expression recognized by the expression recognition sub-module is a designated expression.
  11. The apparatus according to claim 7, characterized in that the second positioning module further comprises:
    a frame pushing sub-module, configured to push the video frame to a user terminal connected to the camera;
    an information receiving sub-module, configured to receive selection indication information returned by the user terminal, the selection indication information being used to indicate a face contained in the video frame;
    a second positioning sub-module, configured to determine, as the face position, the position in the video frame of the face indicated by the selection indication information received by the information receiving sub-module.
  12. The apparatus according to claim 7, characterized in that the apparatus further comprises:
    a distance monitoring module, configured to monitor, after the shooting angle of the camera has been fine-tuned according to the face position, the distance by which a face image corresponding to the face position deviates from the center of the video frame captured by the camera;
    wherein the second adjustment module is further configured to fine-tune the shooting angle of the camera again according to the position of the face image when the distance monitoring module detects that the distance by which the face image deviates from the center of the video frame is greater than a preset distance, so that the face image is at the center of the video frame captured by the camera.
  13. A camera shooting angle adjustment apparatus, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    determine the position of a human body within a shootable range of a camera;
    adjust a shooting angle of the camera so that an image at the human body position is at the center of a video frame captured by the camera;
    determine a face position in the video frame captured by the camera;
    fine-tune the shooting angle of the camera according to the face position, so that an image at the face position is at the center of the video frame captured by the camera.
PCT/CN2016/082692 2015-11-27 2016-05-19 摄像头拍摄角度调整方法及装置 WO2017088378A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020167027675A KR20180078106A (ko) 2015-11-27 2016-05-19 카메라 촬영 각도 조정 방법 및 장치
RU2016142694A RU2695104C2 (ru) 2015-11-27 2016-05-19 Способ и аппарат для регулирования угла съемки камеры
JP2016562261A JP6441958B2 (ja) 2015-11-27 2016-05-19 カメラヘッド撮影角度調整方法および装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510849310.5 2015-11-27
CN201510849310.5A CN105357442A (zh) 2015-11-27 2015-11-27 摄像头拍摄角度调整方法及装置

Publications (1)

Publication Number Publication Date
WO2017088378A1 true WO2017088378A1 (zh) 2017-06-01

Family

ID=55333292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/082692 WO2017088378A1 (zh) 2015-11-27 2016-05-19 摄像头拍摄角度调整方法及装置

Country Status (7)

Country Link
US (1) US10375296B2 (zh)
EP (1) EP3174285A1 (zh)
JP (1) JP6441958B2 (zh)
KR (1) KR20180078106A (zh)
CN (1) CN105357442A (zh)
RU (1) RU2695104C2 (zh)
WO (1) WO2017088378A1 (zh)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105357442A (zh) * 2015-11-27 2016-02-24 小米科技有限责任公司 摄像头拍摄角度调整方法及装置
WO2017147827A1 (zh) * 2016-03-02 2017-09-08 武克易 一种图像获取方法
CN106412417A (zh) * 2016-06-21 2017-02-15 北京小米移动软件有限公司 拍摄图像的方法及装置
CN106250885B (zh) * 2016-08-31 2019-10-11 宇龙计算机通信科技(深圳)有限公司 一种虹膜识别方法、装置及终端
CN108632567B (zh) 2017-03-16 2020-07-28 杭州海康威视数字技术股份有限公司 云台控制方法、装置及系统
CN108965789B (zh) * 2017-05-17 2021-03-12 杭州海康威视数字技术股份有限公司 一种无人机监测方法及音视频联动装置
CN107241536A (zh) * 2017-05-27 2017-10-10 捷开通讯(深圳)有限公司 一种具有传感功能的闪光灯、系统及其实现方法
CN107315489A (zh) * 2017-06-30 2017-11-03 联想(北京)有限公司 一种信息处理方法及追踪设备
CN107609462A (zh) * 2017-07-20 2018-01-19 北京百度网讯科技有限公司 待检测信息生成及活体检测方法、装置、设备及存储介质
CN109391762B (zh) * 2017-08-03 2021-10-22 杭州海康威视数字技术股份有限公司 一种跟踪拍摄的方法和装置
CN107498564B (zh) * 2017-09-11 2019-12-10 台州市黄岩八极果蔬专业合作社 一种具备人体表情识别功能的机器人
CN107832720B (zh) * 2017-11-16 2022-07-08 北京百度网讯科技有限公司 基于人工智能的信息处理方法和装置
CN108702458B (zh) 2017-11-30 2021-07-30 深圳市大疆创新科技有限公司 拍摄方法和装置
CN107835376B (zh) * 2017-12-25 2020-06-19 苏州三星电子电脑有限公司 摄制装置与摄制方法
CN110163833B (zh) * 2018-02-12 2021-11-09 杭州海康威视数字技术股份有限公司 确定刀闸的开合状态的方法和装置
CN109190469B (zh) * 2018-07-27 2020-06-23 阿里巴巴集团控股有限公司 一种检测方法及装置、一种计算设备及存储介质
CN108965714A (zh) * 2018-08-01 2018-12-07 上海小蚁科技有限公司 图像采集方法、装置以及计算机储存介质
CN110830708A (zh) * 2018-08-13 2020-02-21 深圳市冠旭电子股份有限公司 一种追踪摄像方法、装置及终端设备
CN109543564A (zh) * 2018-11-02 2019-03-29 北京小米移动软件有限公司 提醒方法及装置
CN111325074A (zh) * 2018-12-17 2020-06-23 青岛海尔多媒体有限公司 电视终端旋转控制的方法、装置及计算机存储介质
CN109712707A (zh) * 2018-12-29 2019-05-03 深圳和而泰数据资源与云技术有限公司 一种舌诊方法、装置、计算设备及计算机存储介质
CN109993143B (zh) * 2019-04-10 2021-09-17 北京旷视科技有限公司 图像采集设备的安装方法、装置、电子设备及存储介质
CN110276292B (zh) * 2019-06-19 2021-09-10 上海商汤智能科技有限公司 智能车运动控制方法及装置、设备和存储介质
CN110533805A (zh) * 2019-07-29 2019-12-03 深圳绿米联创科技有限公司 智能门锁控制的方法、装置、智能门锁及电子设备
CN110460772B (zh) * 2019-08-14 2021-03-09 广州织点智能科技有限公司 摄像头自动调节方法、装置、设备和存储介质
CN110533015A (zh) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 验证方法及验证装置、电子设备、计算机可读存储介质
CN111756992A (zh) * 2019-09-23 2020-10-09 广东小天才科技有限公司 一种可穿戴设备跟拍的方法及可穿戴设备
CN111142836B (zh) * 2019-12-28 2023-08-29 深圳创维-Rgb电子有限公司 屏幕朝向角度的调整方法、装置、电子产品及存储介质
CN111681344A (zh) * 2020-04-17 2020-09-18 深圳市华正联实业有限公司 人员免下车信息查验方法及系统
KR102305905B1 (ko) 2020-06-05 2021-09-28 주식회사 더투에이치 삼차원 촬영장치 및 이것의 카메라 각도 조정방법
CN111935442A (zh) * 2020-07-31 2020-11-13 北京字节跳动网络技术有限公司 信息显示方法、装置和电子设备
CN116034580A (zh) * 2020-08-21 2023-04-28 海信视像科技股份有限公司 人像定位方法及显示设备
CN112672062B (zh) * 2020-08-21 2022-08-09 海信视像科技股份有限公司 一种显示设备及人像定位方法
CN112235504A (zh) * 2020-09-18 2021-01-15 珠海银邮光电信息工程有限公司 视频监控定位装置、方法、终端及存储介质
CN112333391A (zh) * 2020-11-03 2021-02-05 深圳创维-Rgb电子有限公司 基于声音的人像自动追踪方法、装置、智能终端及介质
CN113132624B (zh) * 2021-03-01 2023-02-07 广东职业技术学院 一种智能终端的角度调节控制方法及系统
CN114098980B (zh) * 2021-11-19 2024-06-11 武汉联影智融医疗科技有限公司 相机位姿调整方法、空间注册方法、系统和存储介质
CN113382222B (zh) * 2021-05-27 2023-03-31 深圳市瑞立视多媒体科技有限公司 一种在用户移动过程中基于全息沙盘的展示方法
CN113382304B (zh) * 2021-06-07 2023-07-18 北博(厦门)智能科技有限公司 一种基于人工智能技术的视频拼接方法
CN113596303B (zh) * 2021-07-26 2022-09-02 江西师范大学 一种视觉辅助装置及方法
CN114630048B (zh) * 2022-03-16 2023-08-22 湖州师范学院 一种自媒体短视频拍摄装置和方法
CN114706230B (zh) * 2022-04-08 2024-04-19 宁波视睿迪光电有限公司 显示装置及光线角度调节方法
CN115278055B (zh) * 2022-06-24 2024-08-02 维沃移动通信有限公司 拍摄方法、拍摄装置及电子设备
WO2024047737A1 (ja) * 2022-08-30 2024-03-07 日本電気株式会社 情報処理装置、情報処理方法及び記録媒体
CN117119307B (zh) * 2023-10-23 2024-03-08 珠海九松科技有限公司 一种基于深度学习的视频交互方法


Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06276514A (ja) * 1993-03-19 1994-09-30 Fujitsu Ltd テレビ会議システムにおけるカメラ制御方式
US5774591A (en) * 1995-12-15 1998-06-30 Xerox Corporation Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images
KR200153695Y1 (ko) * 1996-09-24 1999-08-02 윤종용 전자렌지의 무게감지장치
JPH1141577A (ja) * 1997-07-18 1999-02-12 Fujitsu Ltd 話者位置検出装置
US6444305B2 (en) * 1997-08-29 2002-09-03 3M Innovative Properties Company Contact printable adhesive composition and methods of making thereof
US6850265B1 (en) * 2000-04-13 2005-02-01 Koninklijke Philips Electronics N.V. Method and apparatus for tracking moving objects using combined video and audio information in video conferencing and other applications
JP4581210B2 (ja) * 2000-09-29 2010-11-17 日本電気株式会社 テレビ会議システム
FR2815112B1 (fr) * 2000-10-09 2004-07-16 Alain Triboix Dispositif de climatisation mettant en oeuvre un faux plafond et assurant une diffusion d'air le long des parois
JP3691802B2 (ja) 2002-03-04 2005-09-07 ニスカ株式会社 自動フレーミングカメラ
US6606458B2 (en) * 2001-09-05 2003-08-12 Nisca Corporation Automatic framing camera
US6937745B2 (en) * 2001-12-31 2005-08-30 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
JP2004219277A (ja) 2003-01-15 2004-08-05 Sanyo Electric Co Ltd 人体検知方法およびシステム、プログラム、記録媒体
JP2004295572A (ja) * 2003-03-27 2004-10-21 Matsushita Electric Ind Co Ltd 認証対象画像撮像装置及びその撮像方法
JP2005150834A (ja) 2003-11-11 2005-06-09 Softopia Japan Foundation 監視システム
JP2005208454A (ja) * 2004-01-26 2005-08-04 Konica Minolta Photo Imaging Inc 写真撮影システム、写真撮影方法、及びプログラム
US7917935B2 (en) * 2004-10-01 2011-03-29 Logitech Europe S.A. Mechanical pan, tilt and zoom in a webcam
JP2006311196A (ja) 2005-04-28 2006-11-09 Sony Corp 撮像装置および撮像方法
JP4356663B2 (ja) * 2005-08-17 2009-11-04 ソニー株式会社 カメラ制御装置および電子会議システム
US7918614B2 (en) * 2006-01-20 2011-04-05 Sony Ericsson Mobile Communications Ab Camera for electronic device
US8798671B2 (en) * 2006-07-26 2014-08-05 Motorola Mobility Llc Dual mode apparatus and method for wireless networking configuration
JP5219184B2 (ja) * 2007-04-24 2013-06-26 任天堂株式会社 トレーニングプログラム、トレーニング装置、トレーニングシステムおよびトレーニング方法
NO327899B1 (no) * 2007-07-13 2009-10-19 Tandberg Telecom As Fremgangsmate og system for automatisk kamerakontroll
JP4968929B2 (ja) 2007-07-20 2012-07-04 キヤノン株式会社 画像処理装置及び画像処理方法
KR100904254B1 (ko) 2007-09-06 2009-06-25 연세대학교 산학협력단 비강압적 홍채 영상 획득 시스템 및 그 방법
EP2207342B1 (en) 2009-01-07 2017-12-06 LG Electronics Inc. Mobile terminal and camera image control method thereof
US8599238B2 (en) * 2009-10-16 2013-12-03 Apple Inc. Facial pose improvement with perspective distortion correction
JP4852652B2 (ja) 2010-03-09 2012-01-11 パナソニック株式会社 電子ズーム装置、電子ズーム方法、及びプログラム
JP5725793B2 (ja) * 2010-10-26 2015-05-27 キヤノン株式会社 撮像装置およびその制御方法
KR101275297B1 (ko) * 2011-06-23 2013-06-17 주식회사 삼성항공정보통신 이동객체 추적 카메라 장치
US8585763B2 (en) * 2011-06-30 2013-11-19 Blackstone Medical, Inc. Spring device for locking an expandable support device
DE102012216191A1 (de) * 2011-09-14 2013-03-14 Hitachi Information & Communication Engineering, Ltd. Authentifizierungssystem
EP2764686B1 (en) * 2011-10-07 2019-10-02 Flir Systems, Inc. Smart surveillance camera systems and methods
JP5978639B2 (ja) 2012-02-06 2016-08-24 ソニー株式会社 画像処理装置、画像処理方法、プログラム、及び記録媒体
CN102594990A (zh) * 2012-02-10 2012-07-18 中兴通讯股份有限公司 一种智能手机底座与手机及其实现方法
JP5826206B2 (ja) * 2012-03-29 2015-12-02 富士フイルム株式会社 非線形光学材料及びそれを用いた非線形光学素子
JP2014033265A (ja) * 2012-08-01 2014-02-20 Olympus Imaging Corp 撮像装置、撮像方法およびプログラム
CN103902027A (zh) * 2012-12-26 2014-07-02 鸿富锦精密工业(深圳)有限公司 智能切换装置及其智能切换方法和系统
JP5867424B2 (ja) * 2013-02-28 2016-02-24 ソニー株式会社 画像処理装置、画像処理方法、プログラム
CN103197491B (zh) 2013-03-28 2016-03-30 华为技术有限公司 快速自动聚焦的方法和图像采集装置
CN203301636U (zh) * 2013-05-21 2013-11-20 浙江工业大学 一种可以自动跟踪对象的视频监控设备
US20150146078A1 (en) 2013-11-27 2015-05-28 Cisco Technology, Inc. Shift camera focus based on speaker position
CN104580992B (zh) * 2014-12-31 2018-01-23 广东欧珀移动通信有限公司 一种控制方法及移动终端
CN104992169A (zh) * 2015-07-31 2015-10-21 小米科技有限责任公司 人物的识别方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010055059A1 (en) * 2000-05-26 2001-12-27 Nec Corporation Teleconferencing system, camera controller for a teleconferencing system, and camera control method for a teleconferencing system
CN102833476A (zh) * 2012-08-17 2012-12-19 歌尔声学股份有限公司 终端设备用摄像头和终端设备用摄像头的实现方法
CN103841357A (zh) * 2012-11-21 2014-06-04 中兴通讯股份有限公司 基于视频跟踪的麦克风阵列声源定位方法、装置及系统
CN103475849A (zh) * 2013-09-22 2013-12-25 广东欧珀移动通信有限公司 在视频通话时对摄像头拍摄角度进行调节的方法及装置
CN105357442A (zh) * 2015-11-27 2016-02-24 小米科技有限责任公司 摄像头拍摄角度调整方法及装置

Also Published As

Publication number Publication date
RU2016142694A (ru) 2018-05-08
KR20180078106A (ko) 2018-07-09
RU2695104C2 (ru) 2019-07-19
US10375296B2 (en) 2019-08-06
JP2018501671A (ja) 2018-01-18
US20170155829A1 (en) 2017-06-01
RU2016142694A3 (zh) 2018-06-01
CN105357442A (zh) 2016-02-24
JP6441958B2 (ja) 2018-12-19
EP3174285A1 (en) 2017-05-31

Similar Documents

Publication Publication Date Title
WO2017088378A1 (zh) 摄像头拍摄角度调整方法及装置
KR101834674B1 (ko) 영상 촬영 방법 및 장치
EP3125530B1 (en) Video recording method and device
EP2567535B1 (en) Camera system and method for operating a camera system
US10115019B2 (en) Video categorization method and apparatus, and storage medium
WO2016184104A1 (zh) 识别物体的方法及装置
WO2018228422A1 (zh) 一种发出预警信息的方法、装置及系统
WO2017000491A1 (zh) 获取虹膜图像的方法、装置及红膜识别设备
EP3761627B1 (en) Image processing method and apparatus
EP3226119B1 (en) Method and apparatus for displaying image data from a terminal on a wearable display
CN108901108A (zh) 照明设备、照明设备的控制方法及装置
CN112188089A (zh) 距离获取方法及装置、焦距调节方法及装置、测距组件
US20170244891A1 (en) Method for automatically capturing photograph, electronic device and medium
CN106791057B (zh) 闹钟控制方法及装置
CN104954683B (zh) 确定摄像装置的方法及装置
KR20210157289A (ko) 촬영 프리뷰 이미지를 표시하는 방법, 장치 및 매체
CN104616687A (zh) 录音的方法及装置
CN115766927B (zh) 测谎方法、装置、移动终端及存储介质
WO2023225967A1 (zh) 防窥提示方法、装置、电子设备及存储介质
CN112804462B (zh) 多点对焦成像方法及装置、移动终端、存储介质
CN113138384A (zh) 图像采集方法及装置、存储介质
CN112822405A (zh) 对焦方法、装置、设备及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 20167027675

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2016562261

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2016142694

Country of ref document: RU

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867607

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867607

Country of ref document: EP

Kind code of ref document: A1