CN111246116A - Method for intelligent framing display on screen and mobile terminal - Google Patents


Info

Publication number
CN111246116A
Authority
CN
China
Prior art keywords
screen
visual field
depth sensor
eyes
field observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010200994.7A
Other languages
Chinese (zh)
Other versions
CN111246116B (en)
Inventor
谌春亮
夏俊驰
陈志军
谌琴钰
周琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202010200994.7A
Publication of CN111246116A
Application granted
Publication of CN111246116B
Active legal status (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04M TELEPHONIC COMMUNICATION
                • H04M 1/00 Substation equipment, e.g. for use by subscribers
                    • H04M 1/02 Constructional features of telephone sets
                        • H04M 1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
                            • H04M 1/026 Details of the structure or mounting of specific components
                                • H04M 1/0264 Details of the structure or mounting of specific components for a camera module assembly
                                • H04M 1/0266 Details of the structure or mounting of specific components for a display module assembly
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
                    • H04N 23/50 Constructional details
                        • H04N 23/53 Constructional details of electronic viewfinders, e.g. rotatable or detachable
                        • H04N 23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
                        • H04N 23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
                    • H04N 23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
                    • H04N 23/60 Control of cameras or camera modules
                        • H04N 23/61 Control of cameras or camera modules based on recognised objects
                            • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
                        • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
                            • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
                        • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
                        • H04N 23/67 Focus control based on electronic image sensor signals
                        • H04N 23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming

Abstract

The invention belongs to the field of artificial intelligence and relates to a method and a mobile terminal for intelligent framing display on a screen, which solve the problem that manual magnification, framing and focusing cannot accurately present the display target. The spatial coordinates of the human eyes are captured at every moment, so the view is re-framed intelligently and the screen does not need to be adjusted when the viewer moves; when the bezel is narrow enough, the screen can appear invisible, so that the displayed scene passes for the real one.

Description

Method for intelligent framing display on screen and mobile terminal
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a method and a mobile terminal for intelligent framing display on a screen.
Background
The current wave of development in the artificial intelligence industry stems mainly from the introduction of deep learning algorithms, which achieve large-scale computation on the strength of data volume and computing power; this is a technological breakthrough. With respect to super artificial intelligence, there is still room for continued breakthroughs in basic theoretical research on the origin of consciousness, the mechanisms of the human brain and related topics.
Chinese artificial intelligence start-ups are active in a range of fields: computer vision has the most start-ups, service robotics comes second, and speech and natural language processing ranks third, while intelligent healthcare, machine learning and intelligent driving are also among the more popular areas. Computer vision is one of the important core technologies of artificial intelligence and can be applied in security, finance, hardware, marketing, driving, healthcare and other fields. China's computer vision technology has reached a world-leading level, and its broad commercialization channels and technical foundation are the main reasons it has become the hottest field.
The artificial intelligence industry chain can be divided into an infrastructure layer, an application technology layer and an industry application layer. The infrastructure layer mainly comprises providers of basic data, semiconductor chips, sensors and cloud services; the application technology layer mainly comprises providers of speech recognition, natural language processing, computer vision and deep learning technology; the industry application layer integrates artificial intelligence technologies into products and services and applies them to specific scenarios. At present, autonomous driving, healthcare, security, finance and marketing are generally regarded as the more promising directions in the industry.
Intelligent framing technology has emerged on the back of the trend toward full-screen mobile terminals and the rapid development of artificial intelligence. With a mobile terminal built as a full screen, properly framing and displaying the scene behind the screen can make the terminal appear transparent. Such framing requires the captured scene to be focused and magnified, and doing this manually is impractical: manual operation is imprecise, and once the viewer moves, the eyes move with them and the region to be magnified changes faster than a manual operation can follow. Artificial intelligence, with its fast response and precise operation, is well suited to the task.
Disclosure of Invention
In order to solve the problem that manual magnification, framing and focusing cannot accurately present the display target, the invention provides a method and a mobile terminal for intelligent framing display on a screen. The spatial coordinates of the human eyes are captured at every moment, so the view is re-framed intelligently and the screen does not need to be adjusted when the viewer moves; when the bezel is narrow enough, the screen can appear invisible, so that the displayed scene passes for the real one.
The technical scheme of the invention is to provide a method for intelligent framing display on a screen, which comprises the following steps:
step 1, determining parameters of a mobile terminal;
determining the length and width of the screen of the mobile terminal, the thickness of the mobile terminal, the wide-angle imaging region of the rear wide-angle camera, and the relative position and distance between the front depth sensor and the rear wide-angle camera;
step 2, recognizing the human face and obtaining the space coordinates of the target human eyes;
2.1) the front depth sensor captures a three-dimensional image of the face in front of it;
2.2) the three-dimensional face image captured by the front depth sensor is analysed, and the target face information and eye information are identified;
2.3) the three-dimensional spatial coordinates of the target's eyes, i.e. the direction and distance of the eyes relative to the front depth sensor, are extracted from the target face information and eye information; a spatial coordinate system is established with the screen as the xy reference plane, up and to the right as the positive x and y directions, the front depth sensor as the coordinate origin, and the space in front of the screen as the positive z axis;
step 3, determining a visual field observation area;
a model is built from the three-dimensional spatial coordinates of the target's eyes and projected through the edges of the screen according to the mobile terminal parameters determined above; the region enclosed behind the screen is the visual field observation region;
step 4, intercepting a view observation area in the wide-angle camera shooting area;
within the region where the visual field observation region and the wide-angle imaging region intersect, the visual field observation region behind the screen is cut by an L plane parallel to the xy reference plane, the L plane being defined as the plane in which the subject in focus lies;
the visual field observation region expands outward along the viewing direction of the eyes; the distance from the xy reference plane to the L' plane at which the visual field observation region first lies entirely within the wide-angle imaging region is defined as the shortest intelligent framing distance. On an L plane parallel to the L' plane whose distance from the xy reference plane is greater than or equal to the shortest intelligent framing distance, the size and position that the visual field observation region occupies within the wide-angle imaging region are calculated; an image of that size is then cut out at that position and projected onto the screen (a geometric sketch of this computation is given below).
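To make the geometry of steps 3 and 4 concrete, the following is a minimal sketch in Python (not part of the patent; the function names, the millimetre units, the symmetric pinhole model for the rear camera and the neglect of the terminal thickness are all assumptions). It builds the visual field observation region by casting rays from the eye midpoint through the four screen corners, and scans outward behind the screen for the first depth at which that region fits inside the wide-angle imaging region, i.e. the shortest intelligent framing distance (the L' plane).

```python
import numpy as np

def view_region_at_depth(eye, screen_corners, depth):
    """Project rays from the eye through the four screen corners onto the plane
    z = -depth behind the screen and return (x_min, x_max, y_min, y_max) of the
    visual field observation region there. Convention as in the patent: the
    screen is the z = 0 plane, the front depth sensor is the origin and the
    space in front of the screen is +z, so eye[2] > 0."""
    eye = np.asarray(eye, dtype=float)
    corners = np.asarray(screen_corners, dtype=float)  # shape (4, 3), all z = 0
    # Ray p(t) = eye + t * (corner - eye); solve p_z(t) = -depth.
    t = (-depth - eye[2]) / (corners[:, 2] - eye[2])
    pts = eye + t[:, None] * (corners - eye)
    return pts[:, 0].min(), pts[:, 0].max(), pts[:, 1].min(), pts[:, 1].max()

def camera_region_at_depth(cam_center, half_fov_x, half_fov_y, depth):
    """Rectangle covered by the rear wide-angle camera on the plane z = -depth,
    assuming a symmetric pinhole camera located at cam_center in the xy plane."""
    half_w = depth * np.tan(half_fov_x)
    half_h = depth * np.tan(half_fov_y)
    cx, cy = cam_center
    return cx - half_w, cx + half_w, cy - half_h, cy + half_h

def shortest_framing_distance(eye, screen_corners, cam_center,
                              half_fov_x, half_fov_y, z_max=5000.0, step=1.0):
    """Scan outward behind the screen for the first depth at which the visual
    field observation region lies entirely inside the wide-angle imaging
    region; this depth corresponds to the L' plane of step 4."""
    for depth in np.arange(step, z_max, step):
        vx0, vx1, vy0, vy1 = view_region_at_depth(eye, screen_corners, depth)
        cx0, cx1, cy0, cy1 = camera_region_at_depth(cam_center, half_fov_x,
                                                    half_fov_y, depth)
        if vx0 >= cx0 and vx1 <= cx1 and vy0 >= cy0 and vy1 <= cy1:
            return depth
    return None  # never fits: eye too close to the screen or too far off-axis
```

Because the observation region behind the screen spreads out at a rate proportional to the screen size divided by the eye-to-screen distance, moving the eye farther away makes the region grow more slowly and the returned distance shorter, which matches the behaviour noted in the detailed description.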
Further, in step 2.3) the coordinates of the midpoint of the line connecting the two eyes are used as the three-dimensional spatial coordinates of the human eyes.
Further, in order to obtain the target face information accurately, step 2.2) uses an eigenface (PCA) based face recognition method to identify the target face information; a sketch of this approach is given below.
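The eigenface approach can be sketched as follows (a minimal illustration, not the patent's implementation; it assumes the enrolled face images are available as aligned, flattened grey-scale vectors, and the matching threshold is an arbitrary placeholder).

```python
import numpy as np

def train_eigenfaces(faces, k=20):
    """faces: (n_samples, n_pixels) array of aligned, flattened face images.
    Returns the mean face and the top-k eigenfaces (principal components)."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # Rows of vt are the orthonormal principal directions ("standard faces").
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def encode(face, mean, eigenfaces):
    """A face is stored as its short vector of eigenface coefficients, which is
    far smaller than the image itself and fast to compare."""
    return eigenfaces @ (face - mean)

def is_target_face(face, mean, eigenfaces, target_code, threshold=2500.0):
    """Accept only the enrolled user's face, so that other faces in front of
    the screen do not trigger the intelligent framing."""
    return np.linalg.norm(encode(face, mean, eigenfaces) - target_code) < threshold
```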
The invention also provides a mobile terminal for intelligent framing display on a screen, which comprises a hardware part and a software part;
the hardware part comprises a front depth sensor, a screen and a rear wide-angle camera;
the front depth sensor is used for acquiring a three-dimensional image of a human face in front of the front depth sensor;
the software part comprises a face recognition analysis module, an eye coordinate analysis module, a visual field observation area analysis module and a shortest intelligent framing distance analysis module;
the face recognition analysis module is used for recognizing target face information and eye information by analyzing a face three-dimensional image collected by the front depth sensor;
the eye coordinate analysis module is used for extracting the three-dimensional spatial coordinates of the eyes, i.e. the direction and distance of the eyes relative to the front depth sensor, from the target face information and eye information;
the visual field observation area analysis module is used for determining the visual field observation region formed when the eyes view the screen, from the three-dimensional spatial coordinates of the eyes and the parameters of the mobile terminal (the position, length and width of the screen, the thickness of the mobile terminal, the wide-angle imaging region of the rear wide-angle camera, and the relative position and distance between the front depth sensor and the rear wide-angle camera); this observation region is not fixed and changes as the eyes move;
the shortest intelligent framing distance analysis module is used for determining the shortest intelligent framing distance from the intersection of the wide-angle imaging region of the rear wide-angle camera and the visual field observation region; framing at a distance shorter than this produces a distorted view.
Further, the front depth sensor is a TOF camera.
Compared with the prior art, the invention has the beneficial effects that:
(1) the most natural observation effect.
Because the displayed image is refreshed continuously, objects on the screen change with the observing position, so what is seen appears to be the real object rather than an image. Intelligent display lets the human eye see the scene as it truly is, achieving the most natural observation effect.
(2) No manual adjustment is required.
The front depth sensor of the invention detects the position of the human eyes at all times; provided the CPU is fast enough and is paired with eigenface-based face recognition (which is computationally fast), the photographed scene is re-framed accurately and promptly whenever the viewer moves.
(3) And is more intelligent.
Through face recognition, other faces in front of the screen are filtered out so that they cause no interference, and the scene the user should see is displayed correctly on the screen.
Drawings
FIG. 1 is an overall frame diagram of the present invention;
FIG. 2 is a diagram showing the effect of ordinary photography;
FIG. 3 is a diagram illustrating the effect of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. The embodiments described herein are only for explaining the technical solution of the present invention and are not intended to limit it.
The invention provides a method for intelligent framing display on a screen, which comprises the following steps:
(1) determining parameters such as the length and width of the screen, the thickness of the handset, the imaging angle of the rear camera, and the relative position and distance between the sensor and the rear camera;
(2) establishing a space coordinate system by taking the screen as an xy reference surface, taking the upper right as a positive direction, taking the front depth sensor as a coordinate origin and taking the front of the screen as a positive direction of a z axis;
(3) as shown in fig. 1, the front sensor (e.g. a TOF camera) captures a three-dimensional image of the space in front of it; as long as a face is within the capture area (area a) it can be captured. The captured face is recognised, the target face is found, and the spatial coordinates of the target's eyes are calculated; the midpoint of the line connecting the two eyes may be used as this coordinate;
(4) a model is built from the known eye coordinates and projected through the edges of the display screen; the enclosed region is the observer's visual field observation region, region c in the figure. It is intersected with the wide-angle imaging region (region b); since the wide-angle imaging region is much larger than the visual field observation region, it easily contains the entire visual field observation region.
(5) the visual field observation region behind the screen is cut by an L plane parallel to the xy reference plane. When the wide-angle imaging region just contains the whole visual field observation region, the distance from the L' plane to the xy reference plane is defined as the shortest intelligent framing distance, the L' plane being parallel to the L plane. If the distance to the photographed subject is not less than this value, intelligent framing by this method is possible; otherwise distortion occurs. Framing is carried out on an L plane whose distance from the xy reference plane is greater than or equal to the shortest intelligent framing distance. The shortest framing distance changes as the eyes move: the farther the eyes are from the screen, the shorter the shortest intelligent framing distance; and the larger the inclination angle between the line from the eyes to the rear camera and the screen, the shorter the shortest intelligent framing distance. In fig. 1 the L plane is at the shortest intelligent framing distance, so the L' plane coincides with the L plane. A sketch of the corresponding crop-and-display step is given below.
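The crop-and-display operation of step (5) can be sketched as follows (an illustration under assumptions, not the patent's code: a distortion-free linear mapping between millimetres on the L plane and pixels in the wide-angle frame is assumed, and OpenCV is used only for rescaling).

```python
import cv2

def render_view(frame, view_rect_mm, cam_rect_mm, screen_px):
    """Cut the visual field observation region out of the wide-angle frame and
    scale it to the screen resolution.

    frame        -- wide-angle camera image of shape (H, W, 3)
    view_rect_mm -- (x0, x1, y0, y1) of the observation region on the L plane
    cam_rect_mm  -- (x0, x1, y0, y1) of the full imaging region on the same plane
    screen_px    -- (width, height) of the screen in pixels
    """
    h, w = frame.shape[:2]
    cx0, cx1, cy0, cy1 = cam_rect_mm
    vx0, vx1, vy0, vy1 = view_rect_mm
    # Linearly map millimetres on the L plane to pixel coordinates in the frame.
    px0 = int(round((vx0 - cx0) / (cx1 - cx0) * w))
    px1 = int(round((vx1 - cx0) / (cx1 - cx0) * w))
    py0 = int(round((vy0 - cy0) / (cy1 - cy0) * h))
    py1 = int(round((vy1 - cy0) / (cy1 - cy0) * h))
    crop = frame[py0:py1, px0:px1]
    # Enlarging the crop to the full screen is the "framing amplification" of the
    # description; it stays undistorted only at or beyond the shortest framing
    # distance, where the observation region fits inside the imaging region.
    return cv2.resize(crop, screen_px, interpolation=cv2.INTER_LINEAR)
```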
The corresponding mobile terminal includes a hardware part and an algorithm analysis part. The hardware part comprises a front depth sensor, a screen and a rear wide-angle camera; the algorithm analysis part comprises a face recognition analysis module, an eye coordinate analysis module, a visual field observation area analysis module and a shortest intelligent framing distance analysis module.
The front depth sensor in this embodiment can be a TOF camera or another type of depth sensor and is used for capturing a three-dimensional image of the face in front of it; the screen is an ordinary display used to show images; the rear wide-angle camera photographs the scene behind the screen.

The face recognition analysis module identifies the target face information and eye information by analysing the three-dimensional face image captured by the front depth sensor. Specifically, an eigenface (PCA) based face recognition method can be used: a set of eigenfaces, each represented by an eigenvector, is obtained by principal component analysis (PCA) over a large set of images depicting different faces, and any face image can then be regarded as a combination of these standard faces. Intelligent framing takes effect only when a person whose face data has been enrolled stands in front of the screen, which avoids the ambiguity of several faces appearing in the same scene. In addition, because a face is stored as a short vector of eigenface coefficients rather than as a digital image, storage space is saved and matching is fast.

The eye coordinate analysis module extracts the three-dimensional spatial coordinates of the eyes, i.e. their direction and distance relative to the front depth sensor, from the target face information and eye information.

The visual field observation area analysis module determines the visual field observation region formed when the eyes view the screen, using the three-dimensional spatial coordinates of the eyes together with the mobile terminal parameters: the position, length and width of the screen, the thickness of the mobile terminal, the wide-angle imaging region of the rear wide-angle camera, and the relative position and distance between the front depth sensor and the rear wide-angle camera.

The shortest intelligent framing distance analysis module determines the shortest intelligent framing distance from the intersection of the wide-angle imaging region of the rear wide-angle camera and the visual field observation region.
Fig. 2 shows an image obtained by ordinary photography: because the shooting range is wide and the screen is comparatively small, the viewed object is displayed at a reduced size, which is inconsistent with the size perceived by the human eye, and the visual effect is poor. Fig. 3 shows the intelligent framing display effect of the present invention: the photographed scene is framed and magnified to match the size at which the human eye observes it, the framed position changes as the viewer moves, and the truest scene is displayed intelligently, achieving the most natural observation effect.

Claims (5)

1. A method for intelligent on-screen viewfinder display, comprising the steps of:
step 1, determining parameters of a mobile terminal;
determining the length and width of the screen of the mobile terminal, the thickness of the mobile terminal, the wide-angle imaging region of the rear wide-angle camera, and the relative position and distance between the front depth sensor and the rear wide-angle camera;
step 2, recognizing the human face and obtaining the space coordinates of the target human eyes;
2.1) the front depth sensor captures a three-dimensional image of the face in front of it;
2.2) the three-dimensional face image captured by the front depth sensor is analysed, and the target face information and eye information are identified;
2.3) the three-dimensional spatial coordinates of the target's eyes, i.e. the direction and distance of the eyes relative to the front depth sensor, are extracted from the target face information and eye information; a spatial coordinate system is established with the screen as the xy reference plane, up and to the right as the positive x and y directions, the front depth sensor as the coordinate origin, and the space in front of the screen as the positive z axis;
step 3, determining a visual field observation area;
a model is built from the three-dimensional spatial coordinates of the target's eyes and projected through the edges of the screen according to the mobile terminal parameters determined above; the region enclosed behind the screen is the visual field observation region;
step 4, intercepting a view observation area in the wide-angle camera shooting area;
within the region where the visual field observation region and the wide-angle imaging region intersect, the visual field observation region behind the screen is cut by an L plane parallel to the xy reference plane, the L plane being defined as the plane in which the subject in focus lies;
the visual field observation region expands outward along the viewing direction of the eyes; the distance from the xy reference plane to the L' plane, parallel to the L plane, at which the visual field observation region first lies entirely within the wide-angle imaging region is defined as the shortest intelligent framing distance; on the L plane, the size and position that the visual field observation region occupies within the wide-angle imaging region are calculated, an image of that size is cut out at that position and projected onto the screen, the distance between the L plane and the xy reference plane being greater than or equal to the shortest intelligent framing distance.
2. The method for intelligent framing display on a screen of claim 1, wherein: in step 2.3), the coordinates of the midpoint of the line connecting the two eyes are used as the three-dimensional spatial coordinates of the human eyes.
3. The method for intelligent framing display on a screen of claim 1, wherein: in step 2.2), an eigenface (PCA) based face recognition method is used to identify the target face information.
4. A mobile terminal for intelligent framing display on a screen, characterized in that it comprises a hardware part and a software part;
the hardware part comprises a front depth sensor, a screen and a rear wide-angle camera;
the front depth sensor is used for acquiring a three-dimensional image of a human face in front of the front depth sensor;
the software part comprises a face recognition analysis module, an eye coordinate analysis module, a visual field observation area analysis module and a shortest intelligent framing distance analysis module;
the face recognition analysis module is used for recognizing target face information and eye information by analyzing a face three-dimensional image collected by the front depth sensor;
the eye coordinate analysis module is used for extracting the three-dimensional spatial coordinates of the eyes, i.e. the direction and distance of the eyes relative to the front depth sensor, from the target face information and eye information;
the visual field observation area analysis module is used for determining a visual field observation area formed by an eye observation screen according to the three-dimensional space coordinate of the eyes and the parameters of the mobile terminal;
the shortest intelligent framing distance analysis module is used for judging the shortest intelligent framing distance according to the intersection area of the wide-angle camera shooting area of the rear wide-angle camera and the visual field observation area.
5. The mobile terminal for intelligent framing display on a screen of claim 4, wherein: the front depth sensor is a TOF camera.
CN202010200994.7A — filed 2020-03-20, priority 2020-03-20 — Method for intelligent framing display on screen and mobile terminal — Active, granted as CN111246116B

Priority Applications (1)

Application number: CN202010200994.7A (granted as CN111246116B) · Priority date: 2020-03-20 · Filing date: 2020-03-20 · Title: Method for intelligent framing display on screen and mobile terminal

Applications Claiming Priority (1)

Application number: CN202010200994.7A (granted as CN111246116B) · Priority date: 2020-03-20 · Filing date: 2020-03-20 · Title: Method for intelligent framing display on screen and mobile terminal

Publications (2)

CN111246116A — published 2020-06-05
CN111246116B — published 2022-03-11

Family

ID=70865328

Family Applications (1)

Application number: CN202010200994.7A · Priority date: 2020-03-20 · Filing date: 2020-03-20 · Title: Method for intelligent framing display on screen and mobile terminal · Status: Active (granted as CN111246116B)

Country Status (1)

Country Link
CN (1) CN111246116B (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN200989989Y (en) * 2006-08-09 2007-12-12 王清山 Display device
US20180343441A1 (en) * 2007-08-24 2018-11-29 Videa Llc Perspective altering display system
US20180007314A1 (en) * 2008-07-14 2018-01-04 Musion Ip Ltd. Live Teleporting System and Apparatus
CN102411878A (en) * 2010-09-21 2012-04-11 索尼爱立信移动通信日本株式会社 Sensor-equipped display apparatus and electronic apparatus
CN102510425A (en) * 2011-10-28 2012-06-20 上海华勤通讯技术有限公司 Mobile terminal and live-action display method
US20140085334A1 (en) * 2012-09-26 2014-03-27 Apple Inc. Transparent Texting
CN103856590A (en) * 2012-12-05 2014-06-11 Lg电子株式会社 Glass type mobile terminal
CN105408856A (en) * 2013-07-25 2016-03-16 三星电子株式会社 Method for displaying and an electronic device thereof
CN104427123A (en) * 2013-09-09 2015-03-18 联想(北京)有限公司 Information processing method and electronic equipment
CN103909875A (en) * 2014-04-11 2014-07-09 吴敏正 System for visualization of field outside vehicle shielding objects
CN104639748A (en) * 2015-01-30 2015-05-20 深圳市中兴移动通信有限公司 Method and device for displaying desktop background based on borderless mobile phone
CN105025131A (en) * 2015-07-31 2015-11-04 瑞声声学科技(深圳)有限公司 Working method of mobile phone
US20170060235A1 (en) * 2015-08-24 2017-03-02 Ford Global Technologies, Llc Method of operating a vehicle head-up display
US20180041695A1 (en) * 2016-08-02 2018-02-08 Iplab Inc. Camera driving device and method for see-through displaying
WO2018076172A1 (en) * 2016-10-25 2018-05-03 华为技术有限公司 Image display method and terminal
CN107368192A (en) * 2017-07-18 2017-11-21 歌尔科技有限公司 The outdoor scene observation procedure and VR glasses of VR glasses
CN108449446A (en) * 2017-10-27 2018-08-24 喻金鑫 Virtual transparent view-finder adjusts zoom
CN108184024A (en) * 2018-01-05 2018-06-19 河海大学文天学院 Suitable for reminding the cell phone system and based reminding method of road ahead unsafe condition to cellie
CN110324554A (en) * 2018-03-28 2019-10-11 北京富纳特创新科技有限公司 Video communication device and method
CN108919952A (en) * 2018-06-28 2018-11-30 郑州云海信息技术有限公司 A kind of control method, device, equipment and the storage medium of intelligent terminal screen

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蒋晓瑜 et al.: "Research progress and optimization methods of integral imaging three-dimensional display systems", Optics & Optoelectronic Technology (《光学与光电技术》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017346A (en) * 2020-08-25 2020-12-01 杭州海康威视数字技术股份有限公司 Access control management method, access control terminal, access control system and storage medium
CN112017346B (en) * 2020-08-25 2023-08-18 杭州海康威视数字技术股份有限公司 Access control method, access control terminal, access control system and storage medium
CN112052827A (en) * 2020-09-21 2020-12-08 陕西科技大学 Screen hiding method based on artificial intelligence technology
CN112052827B (en) * 2020-09-21 2024-02-27 陕西科技大学 Screen hiding method based on artificial intelligence technology

Also Published As

Publication number Publication date
CN111246116B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN109064397B (en) Image stitching method and system based on camera earphone
US8330801B2 (en) Complexity-adaptive 2D-to-3D video sequence conversion
US11778403B2 (en) Personalized HRTFs via optical capture
JP2020523665A (en) Biological detection method and device, electronic device, and storage medium
US8314854B2 (en) Apparatus and method for image recognition of facial areas in photographic images from a digital camera
US20080278487A1 (en) Method and Device for Three-Dimensional Rendering
WO2021027537A1 (en) Method and apparatus for taking identification photo, device and storage medium
CN111429517A (en) Relocation method, relocation device, storage medium and electronic device
KR20150080728A (en) Acquisition System and Method of Iris image for iris recognition by using facial component distance
CN104584531A (en) Image processing apparatus and image display apparatus
US20150156475A1 (en) Method and Device for Implementing Stereo Imaging
CN112207821B (en) Target searching method of visual robot and robot
CN111246116B (en) Method for intelligent framing display on screen and mobile terminal
WO2022047828A1 (en) Industrial augmented reality combined positioning system
WO2024021742A1 (en) Fixation point estimation method and related device
CN112257552A (en) Image processing method, device, equipment and storage medium
CN109784215B (en) In-vivo detection method and system based on improved optical flow method
CN114022531A (en) Image processing method, electronic device, and storage medium
CN113610865A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
JP7113910B2 (en) Image processing method and apparatus, electronic equipment, and computer-readable storage medium
CN111385481A (en) Image processing method and device, electronic device and storage medium
CN116363725A (en) Portrait tracking method and system for display device, display device and storage medium
CN112183431A (en) Real-time pedestrian number statistical method and device, camera and server
CN112052827B (en) Screen hiding method based on artificial intelligence technology
CN113805824B (en) Electronic device and method for displaying image on display apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant