CN107622248A - Gaze identification and interaction method and device - Google Patents
Gaze identification and interaction method and device
- Publication number
- CN107622248A CN107622248A CN201710887858.8A CN201710887858A CN107622248A CN 107622248 A CN107622248 A CN 107622248A CN 201710887858 A CN201710887858 A CN 201710887858A CN 107622248 A CN107622248 A CN 107622248A
- Authority
- CN
- China
- Prior art keywords
- face
- video frame
- frame
- camera
- gaze
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
A gaze identification and interaction method and device are provided. The method is applied to an electronic device having a camera and a servo, where the servo is used to steer the camera. The method includes the following steps: acquiring a plurality of video frames with the camera; detecting at least one face in a current video frame of the video frames and in a rotated video frame generated by rotating the current video frame about an orientation axis; identifying, with a pre-trained classifier, whether each detected face is gazing at the camera; and if the identification result confirms that a face is gazing, controlling the servo to steer the camera toward the face identified as gazing, according to the position of the face in the current video frame or the position mapped back from the rotated video frame to the current video frame.
Description
Technical field
The present application relates to an interaction method and device, and in particular to a gaze identification and interaction method and device.
Background technology
Existing interactive devices (such as electronic dolls, electronic pets, or intelligent robots) can interact with the user through limb movements or sound-and-light effects, so as to achieve an entertainment effect. For example, an electronic pet can detect the user's voice and change its expression or respond with an action accordingly. Through such simple responsive actions, an interactive effect with the user can be achieved.
However, the actions or responses of these interactive devices must all be predefined, and during interaction with the user they can only make simple responsive actions to specific instructions (such as pressing a button or making a sound). They cannot make an appropriate response according to the user's facial expression or body language, and thus fail to reproduce the effect of person-to-person interaction in a real scene.
Summary of the invention
In view of this, the present application provides a gaze identification and interaction method and device, which can simulate the eye-contact effect of a face-to-face conversation between people in a real scene.
The gaze identification and interaction method of the present application is applicable to an electronic device having a camera and a servo, where the servo is used to steer the camera. The method includes the following steps: acquiring a plurality of video frames with the camera; detecting at least one face in a current video frame of the video frames and in a rotated video frame generated by rotating the current video frame about an orientation axis; identifying, with a pre-trained classifier, whether each detected face is gazing at the camera; and if the identification result confirms that a face is gazing, controlling the servo to steer the camera toward the face identified as gazing, according to the position of the face in the current video frame or the position mapped back from the rotated video frame to the current video frame.
The gaze identification and interaction device of the present application includes a camera, a servo, a storage device, and a processor. The camera acquires a plurality of video frames. The servo is used to steer the camera. The storage device stores a plurality of modules. The processor accesses and executes the modules stored in the storage device. These modules include a video frame rotation module, a face detection module, a gaze identification module, and a steering module. The video frame rotation module rotates a current video frame of the video frames about an orientation axis into a rotated video frame. The face detection module detects at least one face in the current video frame and the rotated video frame. The gaze identification module identifies, with a pre-trained classifier, whether each detected face is gazing at the camera. When the identification result of the gaze identification module confirms that a face is gazing, the steering module controls the servo to steer the camera toward the face identified as gazing, according to the position of the face in the current video frame or the position mapped back from the rotated video frame to the current video frame.
Based on the above, the gaze identification and interaction method and device of the present application perform face detection on the video frames acquired by the camera, and perform face detection again after rotating the video frame about different axes, so that faces in various poses can be detected. By performing gaze identification on the detected faces with a pre-trained classifier, it can be confirmed whether a detected face is gazing at the camera, and the camera can then be steered toward that face. In this way, the eye-contact effect of a face-to-face conversation between people in a real situation can be simulated.
To make the above features and advantages of the present application more apparent, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a gaze identification and interaction device according to an embodiment of the present application.
Fig. 2 is a flow chart of a gaze identification and interaction method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of rotating a video frame according to an embodiment of the present application.
Fig. 4 is a schematic diagram of controlling the steering of the camera according to an embodiment of the present application.
Fig. 5 is a flow chart of a gaze identification and interaction method according to an embodiment of the present application.
Embodiment
The present application integrates technologies such as voice recognition, face detection, and gaze identification into an intelligent robot or another intelligent device that can interact with people. When the robot receives the user's voice, it can turn toward the direction of the voice so that the camera arranged on the robot can acquire video frames containing the user. When the user gazes at the robot, the robot can detect faces from the video frames, use a pre-trained classifier to identify whether each detected face is gazing at the robot, and then turn its head toward the center of the face (representing the user's eyes), thereby simulating the eye-contact effect of a face-to-face conversation between people in a real situation.
Fig. 1 is a block diagram of a gaze identification and interaction device according to an embodiment of the present application. Referring to Fig. 1, the gaze identification and interaction device 10 of this embodiment is, for example, an intelligent robot or another electronic device that can interact with people, and includes a camera 12, a servo 14, a storage device 16, and a processor 18, whose functions are described below:
The camera 12 is composed of, for example, a lens, an aperture, a shutter, and an image sensor. The lens includes a plurality of optical lenses driven by an actuator such as a stepper motor or a voice coil motor (VCM) to change the relative positions between the lenses and thereby change the focal length. The aperture is a round opening composed of multiple metal blades; the opening can be enlarged or reduced according to the f-number, thereby controlling the amount of light entering the lens. The shutter controls the length of time light enters the lens, and its combination with the aperture affects the exposure of the image acquired by the image sensor. The image sensor is composed of, for example, a charge coupled device (CCD), complementary metal-oxide semiconductor (CMOS) elements, or other types of photosensitive elements, and senses the intensity of the light entering the lens to produce video frames of the subject.
The servo 14 is, for example, a servomotor, which is arranged below or around the camera 12 and can push the camera 12 to change its position and/or angle according to a control signal from the processor 18.
The storage device 16 can be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), a similar element, or a combination of the above elements. In this embodiment, the storage device 16 stores the software programs of a face detection module 162, a video frame rotation module 164, a gaze identification module 166, and a steering module 168.
The processor 18 is, for example, a central processing unit (CPU), or another programmable microprocessor, digital signal processor (DSP), programmable controller, application specific integrated circuit (ASIC), a similar element, or a combination of the above elements. In this embodiment, the processor 18 accesses and executes the modules stored in the storage device 16, so as to implement the gaze identification and interaction method of the embodiments of the present application.
Fig. 2 is a flow chart of a gaze identification and interaction method according to an embodiment of the present application. Referring to Fig. 1 and Fig. 2, the method of this embodiment is applicable to the gaze identification and interaction device 10 described above. The detailed flow of the method of this embodiment is described below in conjunction with the components of the gaze identification and interaction device 10 in Fig. 1.
First, the processor 18 controls the camera 12 to acquire a plurality of video frames (step S202). Then, the processor 18 executes the video frame rotation module 164 to rotate the current video frame about an orientation axis into a rotated video frame, and executes the face detection module 162 to detect at least one face in the current video frame and the rotated video frame (step S204). The face detection module 162 may, for example, execute the Viola-Jones detection method or another face detection algorithm to process in real time the video frames acquired by the camera 12 or the rotated video frames, and detect the faces appearing in these video frames.
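Purely as an illustration of this step, the sketch below runs a Viola-Jones-style Haar-cascade detector over a captured frame with OpenCV; the cascade file, the use of OpenCV, and the webcam source are assumptions, not requirements of the patent.

```python
# Sketch: Viola-Jones-style face detection on camera frames with OpenCV.
# Assumptions: opencv-python is installed and a default webcam is available.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return a list of (x, y, w, h) face boxes found in a BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return list(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5))

cap = cv2.VideoCapture(0)                            # step S202: acquire video frames
ok, current_frame = cap.read()
faces = detect_faces(current_frame) if ok else []    # step S204: detect faces
```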
Specifically, in the initial scene of interaction with a person, the face may not directly face the gaze identification and interaction device 10, so in the video frames acquired by the camera 12 the face is likely to appear sideways or tilted relative to the gaze identification and interaction device 10. For this reason, this embodiment rotates the current video frame clockwise or counterclockwise by some angle about the horizontal axis or the vertical axis, so that the face detection module 162 can perform face detection on the rotated frame. By repeating the above steps of rotating the video frame and detecting faces, a face that was originally tilted in the video frame has a chance of being straightened, so that the face detection module 162 can detect it successfully.
For example, Fig. 3 is a schematic diagram of rotating a video frame according to an embodiment of the present application. Referring to Fig. 3, assume that the x-axis, y-axis, and z-axis are the three orientation axes of a three-dimensional space, where the xz plane is the horizontal plane and the xy plane is the vertical plane. In Fig. 3, the rotation from the z-axis toward the x-axis (clockwise about the y-axis) represents a rotation in the horizontal direction, and the rotation from the y-axis toward the x-axis (counterclockwise about the z-axis) represents a rotation in the vertical direction. By rotating the video frame clockwise or counterclockwise about the different orientation axes and performing face detection after each rotation, faces can still be detected under various face poses.
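For the in-plane case, the following sketch shows one way to produce such a rotated video frame and run detection again; the 30-degree step and the plain 2D center rotation are assumptions (rotations about the scene's horizontal or vertical axes would instead require a perspective warp).

```python
# Sketch: rotate the current frame about its center and detect faces again,
# so that tilted faces become upright for the detector (angle is an assumption).
import cv2

def rotate_frame(frame, angle_deg):
    """Rotate a frame counterclockwise by angle_deg about its center."""
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(frame, m, (w, h))

for alpha in (30, -30):                        # counterclockwise and clockwise
    rotated = rotate_frame(current_frame, alpha)
    faces_in_rotated = detect_faces(rotated)   # reuse the detector sketched above
```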
It should be noted that the same face may be detected in both the original video frame and the rotated video frame, even though the two detections actually represent the same person. For this reason, the embodiment of the present application provides a way of excluding identical faces by area ratio: if the effective-area ratio of a face detected in another direction (i.e., after rotation) exceeds a certain threshold, the face is regarded as an identical face, the information of that face is discarded, and gaze identification is not performed on it. The effective-area ratio here can be understood as an overlap-area ratio: if a face detected in the rotated video frame and a face detected in the original video frame overlap, and the overlap ratio exceeds a certain threshold, subsequent gaze identification is performed only on the face detected in the original video frame, and not on the face detected in the rotated video frame. In this way, it is ensured that each face in a video frame undergoes gaze identification only once, avoiding repetition. Note that the targets of the above face detection are all the faces in both the original video frame and the rotated video frame, and each face is identified separately and only once.
Specifically, after detecting a face in the rotated video frame, the face detection module 162 may further map the face in the rotated video frame back to the current video frame and compare it with the face at the corresponding position in the current video frame, judging whether the ratio of the overlapping area between the face mapped back to the current video frame and the face originally in the current video frame, to the original area of the face in the current video frame, is greater than a threshold. If this ratio is greater than the threshold, the face detected in the rotated video frame and the face detected in the current video frame belong to the same person; in this case, the face detection module 162 discards the information of the face detected in the rotated video frame and does not perform gaze identification on it, avoiding repeated identification.
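A minimal sketch of this duplicate check, assuming axis-aligned face boxes and an implementer-chosen threshold:

```python
# Sketch: discard a face from the rotated frame if, after mapping it back to
# the current frame, its overlap with an already-detected face exceeds a
# threshold relative to that original face's area (threshold is assumed).
def overlap_ratio(box_a, box_b):
    """Overlapping area of two (x, y, w, h) boxes divided by box_b's area."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(bw * bh)

def keep_new_faces(mapped_back_faces, original_faces, threshold=0.5):
    kept = []
    for face in mapped_back_faces:
        if all(overlap_ratio(face, orig) <= threshold for orig in original_faces):
            kept.append(face)        # not a duplicate of a face already found
    return kept
```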
Next, the processor 18 executes the gaze identification module 166 to identify, with a pre-trained classifier, whether each face detected by the face detection module 162 is gazing at the camera 12, so as to confirm whether there is a gazing face (step S206). Specifically, the gaze identification module 166 may, for example, collect a large number of face images in advance, have a user judge whether the face in each face image is gazing at the camera, and label each face image with a gaze label accordingly. The gaze identification module 166 can then train a neural network using these face images and their corresponding gaze labels, so as to obtain a classifier capable of identifying whether a face is gazing. The neural network includes, for example, 2 convolutional layers, 2 fully connected layers, and 1 output layer using a softmax function, but is not limited thereto. Those skilled in the art may, as actually required, use a convolutional neural network with different numbers and combinations of convolutional layers, pooling layers, fully connected layers, and output layers, or another type of neural network.
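As a hedged illustration only (the input resolution, channel counts, and the choice of PyTorch are assumptions; the patent specifies only 2 convolutional layers, 2 fully connected layers, and a softmax output):

```python
# Sketch: gaze/no-gaze classifier with 2 convolutional layers, 2 fully
# connected layers and a softmax output; 64x64 grayscale face crops assumed.
import torch
import torch.nn as nn

class GazeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),   # fully connected layer 1
            nn.Linear(128, 2),                         # fully connected layer 2
            nn.Softmax(dim=1),                         # gaze / no-gaze probabilities
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = GazeClassifier()
probs = model(torch.randn(1, 1, 64, 64))   # probs[0, 1]: probability of gazing
```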
Finally, when the identification result of the gaze identification module 166 confirms that there is a gazing face, the processor 18 executes the steering module 168 to control the servo 14 to steer the camera 12 toward the face identified as gazing, according to the position of the face in the current video frame or the position of the face mapped back from the rotated video frame to the current video frame (step S208). Specifically, for the case where a face in the rotated video frame is identified as gazing, the gaze identification module 166 may first map the position of the face back to the current video frame, as the basis for controlling the steering of the camera 12. As an example, assume the rotation angle of the rotated video frame is α, the detected face position is (x0, y0), and the width and height of the original video frame are w and h respectively; then the position (x, y) mapped back to the original video frame is:
For a counterclockwise rotation:
For a clockwise rotation:
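The mapping formulas themselves appear only as figures in the original publication and are not reproduced above. As an assumption-labeled illustration, if the rotated frame was produced by a plain rotation of α about the frame center (as in the earlier rotation sketch), the mapping can be obtained from the inverse of the same affine transform:

```python
# Sketch: map a face position (x0, y0) found in a frame rotated counterclockwise
# by alpha degrees about its center back to original-frame coordinates, where
# w and h are the original frame's width and height. Clockwise rotations use a
# negative angle. This assumes the plain center rotation of rotate_frame() above.
import cv2
import numpy as np

def map_back(x0, y0, alpha_deg, w, h):
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), alpha_deg, 1.0)
    inv = cv2.invertAffineTransform(m)
    x, y = inv @ np.array([x0, y0, 1.0])
    return float(x), float(y)
```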
It should be noted that, in one embodiment, the steering module 168 may, for example, divide the current video frame into a plurality of regions, and control the servo 14 to steer the camera 12 toward the face according to the distance and direction by which the position of the face in the current video frame, or the position mapped back from the rotated video frame to the current video frame, deviates from the central region of these regions, so that after steering the face is located in the central region of the video frames acquired by the camera 12. In another embodiment, the steering range of the camera can be related to the width w of the video frame, and the direction and angle by which the camera should rotate can be calculated from the pixel difference by which the face deviates from the central region. In yet another embodiment, the position of the camera may be translated, or translated and rotated at the same time, so that the face is located in the central region of the video frames acquired by the camera 12 after the translation and/or rotation; no limitation is imposed here.
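A minimal sketch of the second variant, assuming a horizontal and vertical field of view that maps linearly onto the servo's pan/tilt range (the field-of-view values are placeholders):

```python
# Sketch: convert the face center's pixel offset from the frame center into
# pan/tilt angles for the servo. The field-of-view values are assumptions.
HFOV_DEG = 60.0    # assumed horizontal field of view of the camera
VFOV_DEG = 40.0    # assumed vertical field of view of the camera

def steering_angles(face_box, frame_w, frame_h):
    """Return (pan_deg, tilt_deg) that would bring the face toward the center."""
    x, y, w, h = face_box
    face_cx, face_cy = x + w / 2.0, y + h / 2.0
    dx = face_cx - frame_w / 2.0       # positive: face lies to the right of center
    dy = face_cy - frame_h / 2.0       # positive: face lies below center
    pan = dx / frame_w * HFOV_DEG      # pan right by this many degrees
    tilt = -dy / frame_h * VFOV_DEG    # tilt up when the face is above center
    return pan, tilt
```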
For example, Fig. 4 is a schematic diagram of controlling the steering of the camera according to an embodiment of the present application. Referring to Fig. 4, assume the video frame 40 is a video frame acquired by the camera, and the face 42 in it has been identified as a gazing face. As shown in Fig. 4, the video frame 40 is divided into 9 regions, and the face 42 identified as gazing is located in the lower-right region 40b. According to the distance and direction by which the position of the face 42 (for example, the center position of the face 42) deviates from the central region 40a (the center position), the camera can be controlled to steer accordingly (in this embodiment, toward the lower right), so that the face 42 is located in the central region 40a of the video frames acquired by the camera after steering. By keeping the face 42 in the central region 40a of the video frame 40, the camera is steered toward the gaze position.
With the gaze identification and interaction method described above, it can be recognized whether someone nearby is gazing at the gaze identification and interaction device 10 of this embodiment, and the gaze identification and interaction device 10 can be turned toward the gazer's face, thereby simulating the eye-contact effect of a conversation between people in a real scene.
It should be noted that, in the initial scene of interaction with a person, a face may not appear in the field of view of the device's camera; and even if a face appears in the field of view and gazes at the camera, this may merely be a passing glance rather than an intentional gaze. For this reason, the present application provides another embodiment that can solve the above problems and obtain a better recognition effect.
Specifically, Fig. 5 is a flow chart of a gaze identification and interaction method according to an embodiment of the present application. Referring to Fig. 1 and Fig. 5, the method of this embodiment is applicable to the gaze identification and interaction device 10 described above. The detailed flow of the method of this embodiment is described below in conjunction with the components of the gaze identification and interaction device 10 in Fig. 1.
First, the processor 18 receives audio with an audio receiving device and judges the source direction of the audio, so as to control the servo 14 to steer the camera 12 toward this source direction (step S502). The audio receiving device is, for example, a microphone, a directional microphone, a microphone array, or another device capable of identifying the direction of a sound source; no limitation is imposed here. By steering the camera 12 toward the source direction of the audio, it can be ensured that the camera 12 acquires video frames containing the face that produced the audio, so that subsequent gaze identification can be performed.
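Purely as an illustration of step S502 (the direction-of-arrival estimator and the servo interface below are hypothetical placeholders, not APIs defined by the patent):

```python
# Sketch: steer the camera toward an estimated sound-source azimuth.
# estimate_azimuth_deg stands in for whatever direction-of-arrival method the
# microphone array provides; servo.set_pan is a hypothetical pan interface.
def turn_camera_to_sound(servo, estimate_azimuth_deg, pan_limit_deg=90.0):
    azimuth = estimate_azimuth_deg()                       # e.g. +30 = 30 deg right
    azimuth = max(-pan_limit_deg, min(pan_limit_deg, azimuth))
    servo.set_pan(azimuth)                                 # step S502
```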
Next, the processor 18 controls the camera 12 to acquire a plurality of video frames (step S504). The processor 18 executes the video frame rotation module 164 to rotate the current video frame about an orientation axis into a rotated video frame, executes the face detection module 162 to detect at least one face in the current video frame and the rotated video frame (step S506), and executes the gaze identification module 166 to identify, with a pre-trained classifier, whether each face detected by the face detection module 162 is gazing at the camera 12, so as to confirm whether there is a gazing face (step S508). The above steps S504~S508 are the same as or similar to steps S202~S206 of the previous embodiment, so their details are not repeated here.
Whereas in the previous embodiment a gazing face is confirmed as soon as one is identified in the current video frame, in this embodiment a gazing face must be identified in a plurality of consecutive video frames before its presence is confirmed. Accordingly, after the gaze identification module 166 identifies that the current video frame contains a gazing face, it judges whether the number of consecutive video frames determined to contain a gazing face is greater than a preset number (step S510).
If the number of video frames determined to contain a gazing face is not greater than the preset number, the flow returns to step S504: the processor 18 controls the camera 12 to acquire the next video frame, the face detection module 162 continues to detect faces in the next video frame and its rotated video frame, and the gaze identification module 166 identifies whether each detected face is gazing at the camera 12, so as to judge whether the next video frame contains a gazing face. If it does, the number of consecutive video frames determined to contain a gazing face is accumulated, and the judgment of step S510 is performed again.
If the number of video frames determined to contain a gazing face is greater than the preset number, it is confirmed that there is a gazing face. The processor 18 then executes the steering module 168 to control the servo 14 to steer the camera 12 toward the face identified as gazing, according to the position of the face in the current video frame or the position of the face mapped back from the rotated video frame to the current video frame (step S512). The steering method has been described in the previous embodiment, so its details are not repeated here.
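Under the same assumptions as the earlier sketches (reusing the detect_faces helper, with is_gazing and steer_towards standing in for the classifier and steering module), the confirmation loop of steps S504 to S512 could be structured roughly as follows:

```python
# Sketch of steps S504-S512: confirm a gaze only after it has been seen in more
# than PRESET_COUNT consecutive frames, then steer toward the gazing face.
PRESET_COUNT = 5    # assumed preset number of consecutive frames

def run_gaze_loop(cap, is_gazing, steer_towards):
    """cap: cv2.VideoCapture; is_gazing(face_box, frame) and steer_towards(face_box)
    are placeholders for the classifier and steering module sketched above."""
    consecutive = 0
    while True:
        ok, frame = cap.read()                     # step S504: next video frame
        if not ok:
            break
        faces = detect_faces(frame)                # step S506 (rotated frames omitted)
        gazing = [f for f in faces if is_gazing(f, frame)]   # step S508
        if gazing:
            consecutive += 1                       # step S510: count consecutive hits
            if consecutive > PRESET_COUNT:
                steer_towards(gazing[0])           # step S512: steer toward the face
                consecutive = 0
        else:
            consecutive = 0
```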
By steering the camera 12 toward the source direction of the audio, it can be ensured that the camera 12 acquires video frames containing the face that produced the audio; and by detecting whether a plurality of consecutive video frames contain a gazing face, it can be confirmed whether the user's intention is really to gaze. In this way, a better recognition effect can be obtained.
In summary, the gaze identification and interaction method and device of the present application can perform real-time face detection and gaze identification through a back-end system while the camera captures video frames, and automatically control and adjust the steering of the camera. In this way, whenever a gaze is detected, the camera (or the head of a robot containing the camera) can immediately turn toward it, so as to approximate the eye-contact effect of interpersonal communication in a real scene.
Although the present application has been disclosed above by way of embodiments, they are not intended to limit the present application. Any person skilled in the art may make changes and modifications without departing from the spirit and scope of the present application; therefore, the protection scope of the present application shall be defined by the appended claims.
Description of reference numerals
10: Gaze identification and interaction device
12: Camera
14: Servo
16: Storage device
18: Processor
40: Video frame
40a: Central region
40b: Lower-right region
42: Face
S202~S208, S502~S512: Steps.
Claims (12)
1. A gaze identification and interaction method, applicable to an electronic device having a camera and a servo, the servo being used to steer the camera, the method comprising the following steps:
acquiring a plurality of video frames with the camera;
detecting at least one face in a current video frame of the video frames and in a rotated video frame generated by rotating the current video frame about an orientation axis;
identifying, with a pre-trained classifier, whether each detected face is gazing at the camera; and
if an identification result confirms that a face is gazing, controlling the servo to steer the camera toward the face identified as gazing, according to a position of the face in the current video frame or a position mapped back from the rotated video frame to the current video frame.
2. The gaze identification and interaction method of claim 1, wherein the electronic device further comprises an audio receiving device, and before the step of acquiring the video frames with the camera, the method further comprises:
receiving audio with the audio receiving device and judging a source direction of the audio, so as to control the servo to steer the camera toward the source direction.
3. The gaze identification and interaction method of claim 1, wherein after the step of detecting the at least one face in the current video frame of the video frames and in the rotated video frame generated by rotating the current video frame about the orientation axis, the method further comprises:
judging whether a ratio of an overlapping area between the face in the rotated video frame after being mapped back to the current video frame and the corresponding face at the position in the current video frame, to an original area of the face in the current video frame, is greater than a threshold; and
if the ratio is greater than the threshold, discarding information of the face in the rotated video frame.
4. The gaze identification and interaction method of claim 1, wherein before the step of identifying, with the pre-trained classifier, whether each detected face is gazing at the camera, the method further comprises:
collecting a large number of face images, and labeling each face image with a gaze label according to whether the face in the face image is gazing; and
training a neural network using the face images and their corresponding gaze labels, to obtain the classifier for identifying gaze.
5. The gaze identification and interaction method of claim 1, wherein after the step of identifying, with the pre-trained classifier, whether each detected face is gazing at the camera, the method further comprises:
detecting the face in a next video frame of the current video frame and in its rotated video frame, and identifying whether each detected face is gazing at the camera, so as to judge whether the next video frame contains a gazing face; and
repeating the above steps, and confirming that there is a gazing face when the number of consecutive video frames determined to contain a gazing face is greater than a preset number.
6. The gaze identification and interaction method of claim 1, wherein the step of controlling the servo to steer the camera toward the face identified as gazing comprises:
dividing the current video frame into a plurality of regions, and controlling the servo to steer the camera toward the face according to a distance and a direction by which the position of the face in the current video frame, or the position mapped back from the rotated video frame to the current video frame, deviates from a central region of the regions, so that after steering the face is located in the central region of the video frames acquired by the camera.
7. A gaze identification and interaction device, comprising:
a camera, acquiring a plurality of video frames;
a servo, steering the camera;
a storage device, storing a plurality of modules; and
a processor, accessing and executing the modules, the modules comprising:
a video frame rotation module, rotating a current video frame of the video frames about an orientation axis into a rotated video frame;
a face detection module, detecting at least one face in the current video frame and the rotated video frame;
a gaze identification module, identifying, with a pre-trained classifier, whether each detected face is gazing at the camera; and
a steering module, controlling, when an identification result of the gaze identification module confirms that a face is gazing, the servo to steer the camera toward the face identified as gazing, according to a position of the face in the current video frame or a position mapped back from the rotated video frame to the current video frame.
8. The gaze identification and interaction device of claim 7, further comprising:
an audio receiving device, receiving audio, wherein the steering module further judges a source direction of the audio, so as to control the servo to steer the camera toward the source direction.
9. The gaze identification and interaction device of claim 7, wherein the face detection module further judges whether a ratio of an overlapping area between the face in the rotated video frame after being mapped back to the current video frame and the corresponding face at the position in the current video frame, to an original area of the face in the current video frame, is greater than a threshold, and if the ratio is greater than the threshold, discards information of the face in the rotated video frame.
10. The gaze identification and interaction device of claim 7, wherein the gaze identification module further collects a large number of face images, labels each face image with a gaze label according to whether the face in the face image is gazing, and trains a neural network using the face images and their corresponding gaze labels, to obtain the classifier for identifying gaze.
11. The gaze identification and interaction device of claim 7, wherein:
the face detection module further detects the face in a next video frame of the current video frame and in its rotated video frame; and
the gaze identification module further identifies whether each detected face is gazing at the camera, so as to judge whether the next video frame contains a gazing face, and confirms that there is a gazing face when the number of consecutive video frames determined to contain a gazing face is greater than a preset number.
12. The gaze identification and interaction device of claim 7, wherein the steering module divides the current video frame into a plurality of regions, and controls the servo to steer the camera toward the face according to a distance and a direction by which a position of the face in the current video frame, or a position mapped back from the rotated video frame to the current video frame, deviates from a central region of the regions, so that after steering the face is located in the central region of the video frames acquired by the camera.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710887858.8A CN107622248B (en) | 2017-09-27 | 2017-09-27 | Gaze identification and interaction method and device |
TW106138396A TWI683575B (en) | 2017-09-27 | 2017-11-07 | Method and apparatus for gaze recognition and interaction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710887858.8A CN107622248B (en) | 2017-09-27 | 2017-09-27 | Gaze identification and interaction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107622248A true CN107622248A (en) | 2018-01-23 |
CN107622248B CN107622248B (en) | 2020-11-10 |
Family
ID=61090845
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710887858.8A Active CN107622248B (en) | 2017-09-27 | 2017-09-27 | Gaze identification and interaction method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107622248B (en) |
TW (1) | TWI683575B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010026520A2 (en) * | 2008-09-03 | 2010-03-11 | Koninklijke Philips Electronics N.V. | Method of performing a gaze-based interaction between a user and an interactive display system |
CN102143314A (en) * | 2010-02-02 | 2011-08-03 | 鸿富锦精密工业(深圳)有限公司 | Control system and method of pan/tile/zoom (PTZ) camera as well as adjusting device with control system |
CN101917548A (en) * | 2010-08-11 | 2010-12-15 | 无锡中星微电子有限公司 | Image pickup device and method for adaptively adjusting picture |
EP2790126A1 (en) * | 2013-04-08 | 2014-10-15 | Cogisen SRL | Method for gaze tracking |
CN105763829A (en) * | 2014-12-18 | 2016-07-13 | 联想(北京)有限公司 | Image processing method and electronic device |
CN105898136A (en) * | 2015-11-17 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Camera angle adjustment method, system and television |
CN106407882A (en) * | 2016-07-26 | 2017-02-15 | 河源市勇艺达科技股份有限公司 | Method and apparatus for realizing head rotation of robot by face detection |
CN106412420A (en) * | 2016-08-25 | 2017-02-15 | 安徽华夏显示技术股份有限公司 | Interactive photographing implementation method |
CN206200967U (en) * | 2016-09-09 | 2017-05-31 | 南京玛锶腾智能科技有限公司 | Robot target positioning follows system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108388857A (en) * | 2018-02-11 | 2018-08-10 | 广东欧珀移动通信有限公司 | Method for detecting human face and relevant device |
CN108319937A (en) * | 2018-03-28 | 2018-07-24 | 北京市商汤科技开发有限公司 | Method for detecting human face and device |
CN113635833A (en) * | 2020-04-26 | 2021-11-12 | 晋城三赢精密电子有限公司 | Vehicle-mounted display device, method and system based on automobile A column and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107622248B (en) | 2020-11-10 |
TW201916669A (en) | 2019-04-16 |
TWI683575B (en) | 2020-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11852732B2 (en) | System and method of capturing and generating panoramic three-dimensional images | |
CN110249626B (en) | Method and device for realizing augmented reality image, terminal equipment and storage medium | |
EP2993894B1 (en) | Image capturing method and electronic apparatus | |
CN107507243A (en) | A kind of camera parameters method of adjustment, instructor in broadcasting's video camera and system | |
WO2018205104A1 (en) | Unmanned aerial vehicle capture control method, unmanned aerial vehicle capturing method, control terminal, unmanned aerial vehicle control device, and unmanned aerial vehicle | |
CN107343165A (en) | A kind of monitoring method, equipment and system | |
CN106851094A (en) | A kind of information processing method and device | |
CN107622248A (en) | Gaze identification and interaction method and device | |
CN106559664A (en) | The filming apparatus and equipment of three-dimensional panoramic image | |
CN110337806A (en) | Group picture image pickup method and device | |
US20240244330A1 (en) | Systems and methods for capturing and generating panoramic three-dimensional models and images | |
CN113870213A (en) | Image display method, image display device, storage medium, and electronic apparatus | |
US10817992B2 (en) | Systems and methods to create a dynamic blur effect in visual content | |
CN105049715B (en) | A kind of flash photographing method and mobile terminal | |
CN107094267A (en) | Web TV with user identification function | |
US20240353563A1 (en) | System and method of capturing and generating panoramic three-dimensional images | |
CN107948522B (en) | Method, device, terminal and storage medium for selecting shot person head portrait by camera | |
CN117528237A (en) | Adjustment method and device for virtual camera | |
CN116017133A (en) | Image data acquisition method and device, computer equipment and storage medium | |
Huhle et al. | Why HDR is important for 3DTV model acquisition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |