CN107300967B - Intelligent navigation method, device, storage medium and terminal - Google Patents

Intelligent navigation method, device, storage medium and terminal Download PDF

Info

Publication number
CN107300967B
Authority
CN
China
Prior art keywords
terminal
user
eyes
screen
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710526234.3A
Other languages
Chinese (zh)
Other versions
CN107300967A (en)
Inventor
梁昆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710526234.3A
Publication of CN107300967A
Application granted
Publication of CN107300967B

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G06F1/32 Means for saving power
    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234 Power saving characterised by the action undertaken
    • G06F1/325 Power saving in peripheral device
    • G06F1/3265 Power saving in display device
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention discloses an intelligent navigation method, an intelligent navigation device, a storage medium and a terminal. The method comprises the following steps: while the terminal plays navigation data, which comprises image data and voice data, a first image of the user's eyes is acquired through the terminal's camera; whether the user is currently gazing at the terminal's screen is judged according to this first eye image; and if the user is not currently gazing at the screen, playback of the image data is stopped and the terminal's screen is turned off. By automatically turning off the screen whenever the user's gaze leaves it during navigation, the method and device effectively reduce the terminal's power consumption and avoid wasting battery power.

Description

Intelligent navigation method, device, storage medium and terminal
Technical Field
The invention relates to the field of mobile communication, in particular to an intelligent navigation method, an intelligent navigation device, a storage medium and a terminal.
Background
With the development of terminal technology, terminals have evolved from devices that simply provide telephony into platforms for running general-purpose software. Such a platform no longer aims only at call management; it provides an operating environment for application software of all kinds, such as call management, games and entertainment, office work and mobile payment, and with its wide adoption it has penetrated deeply into people's life and work.
With the continuous improvement of living standards, vehicles have become an indispensable means of transportation in people's lives. The choice of driving route is one of the key factors determining travel efficiency and road satisfaction, so vehicle navigation is particularly important in making vehicles serve people better. At present, most terminal devices on the market integrate the Global Positioning System (GPS), so users often navigate with their terminals; the navigation function offers conveniences such as map query, route planning and real-time guidance, greatly facilitating driving and travel.
However, the inventor of the present invention found that during car navigation with a terminal device, the terminal screen is typically kept in a normally-on state until navigation ends, so that road-condition information remains visible while the terminal also performs voice broadcasts. To satisfy users' visual demands for entertainment, video and interaction, terminal screens are designed ever larger; lighting a large screen consumes a great deal of power, and its animation and touch effects require still more, so a screen kept lit throughout navigation greatly increases the terminal's power consumption and impairs its usability.
Disclosure of Invention
The embodiment of the invention provides an intelligent navigation method, an intelligent navigation device, a storage medium and a terminal, which can reduce the power consumption of the terminal in the process of using the terminal for navigation.
In a first aspect, an embodiment of the present invention provides an intelligent navigation method, including:
in the process of playing navigation data by a terminal, acquiring a first image of eyes of a user through a camera of the terminal, wherein the navigation data comprises image data and voice data;
judging whether the user watches the screen of the terminal at present or not according to the first image of the eyes of the user;
and if the user is determined not to watch the screen of the terminal at present, stopping playing the image data and turning off the screen of the terminal.
In a second aspect, an embodiment of the present invention further provides an intelligent navigation apparatus, including: the device comprises a first image acquisition module, a first judgment module and a first processing module;
the first image acquisition module is used for acquiring a first image of eyes of a user through a camera of the terminal in the process of playing navigation data by the terminal, wherein the navigation data comprises image data and voice data;
the first judging module is used for judging whether the user watches the screen of the terminal at present according to the first image of the eyes of the user;
and the first processing module is used for stopping the playing of the image data and turning off the screen of the terminal when the first judging module judges that the user is not currently gazing at the screen of the terminal.
In a third aspect, the present invention further provides a storage medium storing instructions which, when executed by a processor, implement the steps of the intelligent navigation method.
In a fourth aspect, an embodiment of the present invention further provides a terminal, including a memory and a processor, where the memory stores instructions, and the processor loads the instructions to execute the steps of the intelligent navigation method.
In the scheme described above, a first image of the user's eyes is first acquired through the camera of the terminal while the terminal plays navigation data, the navigation data comprising image data and voice data; whether the user is currently gazing at the screen of the terminal is then judged according to this first eye image; and if not, playback of the image data is stopped and the screen of the terminal is turned off. The method and device automatically turn off the screen when the user's gaze does not attend to the terminal's screen during navigation, effectively reducing the terminal's power consumption and avoiding wasted battery power.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of a scene framework of an intelligent navigation method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of an intelligent navigation method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an eye image of an intelligent navigation method according to an embodiment of the present invention.
Fig. 4 is a schematic view of another eye image of the intelligent navigation method according to the embodiment of the present invention.
Fig. 5 is a schematic diagram of another eye image of the intelligent navigation method according to the embodiment of the present invention.
Fig. 6 is another schematic flow chart of the intelligent navigation method according to the embodiment of the present invention.
Fig. 7 is a schematic view of an application scenario of the intelligent navigation method according to the embodiment of the present invention.
Fig. 8 is a schematic structural diagram of an intelligent navigation device according to an embodiment of the present invention.
Fig. 9 is another schematic structural diagram of an intelligent navigation device according to an embodiment of the present invention.
Fig. 10 is a schematic structural diagram of an intelligent navigation device according to an embodiment of the present invention.
Fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present invention are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the invention and should not be taken as limiting the invention with regard to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers, unless otherwise indicated. These steps and operations, referred to at several points as being computer-executed, involve the manipulation by the computer's processing unit of electronic signals representing data in structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data maintains a data structure, that is, a physical location in memory with particular characteristics defined by the data format. However, although the principles of the invention are described in these terms, this is not meant as a limitation; the various steps and operations described hereinafter may also be implemented in hardware.
The principles of the present invention are operational with numerous other general-purpose or special-purpose computing and communication environments or configurations. Examples of well-known computing systems, environments, and configurations that may be suitable for use with the invention include, but are not limited to, hand-held telephones, personal computers, servers, multiprocessor systems, microprocessor-based systems, mainframe computers, and distributed computing environments that include any of the above systems or devices.
The details will be described below separately.
This embodiment will be described from the perspective of an intelligent navigation device, which may be integrated in a terminal; the terminal may be an electronic device with a network function, such as a mobile Internet device (e.g., a smartphone or a tablet computer).
Referring to fig. 1, fig. 1 is a schematic view of a scene architecture of an intelligent navigation method according to an embodiment of the present invention, including a terminal and a server, where the terminal and the server establish a communication connection through the internet.
The user sends a navigation request containing destination information to the server through the terminal; after receiving the request, the server generates a navigation route according to the destination information and the current road-condition information and sends the route to the terminal. The terminal can send data to the server via the WEB, or through a client program installed in the terminal. The server receives the data sent by the terminal, processes it based on machine deep learning, and finally generates a navigation route, so that the user can travel to the destination along the generated route.
Any of the following transmission protocols may be employed between the terminal and the server, but they are not limited to these: HTTP (HyperText Transfer Protocol), FTP (File Transfer Protocol), P2P (Peer to Peer), P2SP (Peer to Server & Peer), and the like.
The terminal may be a mobile terminal such as a mobile phone or a tablet computer, or a conventional PC (personal computer); the embodiments of the present invention do not limit this.
Referring to fig. 2, fig. 2 is a schematic flowchart of an intelligent navigation method according to an embodiment of the present invention, where the intelligent navigation method includes:
step S101, in the process that the terminal plays the navigation data, a first image of eyes of a user is obtained through a camera of the terminal.
In the embodiment of the invention, in the process of playing the navigation data by the terminal, the eye image of the user can be acquired by the terminal camera (such as an infrared camera). Wherein the eye image may include: an iris image region including an iris image, and a sclera image region including a sclera image. In addition, the navigation data includes image data and voice data.
Specifically, this embodiment may determine a face image from the current picture captured by the terminal camera and then extract an eye image from the face image. For example, an image may be captured at each set interval, a face image obtained from it by face recognition, an eye image extracted from the face image, and feature information further extracted from the eye image. This step captures at a set interval, for example once every 2 seconds, and each capture may take multiple images in succession, for example 10 images at a time.
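As a rough sketch of this timed-burst capture, in Python (the 2-second interval and 10-frame burst are just the example values above, and the OpenCV camera interface is an implementation assumption, not something the patent specifies):

```python
import time

import cv2  # OpenCV, assumed available for camera access

CAPTURE_INTERVAL_S = 2   # example value from the text: one capture round every 2 seconds
FRAMES_PER_BURST = 10    # example value from the text: 10 consecutive frames per round

def capture_bursts(camera_index=0):
    """Yield a burst of consecutive camera frames every CAPTURE_INTERVAL_S seconds."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            burst = []
            for _ in range(FRAMES_PER_BURST):
                ok, frame = cap.read()
                if ok:
                    burst.append(frame)
            yield burst
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        cap.release()
```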
Face recognition is a biometric technology that identifies a person based on facial feature information. A camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and a series of face-related techniques are then applied to the detected faces. A face image contains abundant pattern features, such as histogram features, color features, template features, structural features and Haar features. This embodiment can pick out the useful information and use these features to realize face detection.
In one embodiment, face recognition may be performed using the AdaBoost algorithm, an iterative classification method that combines several weak classifiers into a new strong classifier. During face detection, the AdaBoost algorithm picks out the rectangular features (weak classifiers) that best represent the face, builds them into a strong classifier by weighted voting, and then chains several trained strong classifiers into a cascade-structured classifier, which effectively improves detection speed.
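OpenCV's pretrained Haar cascades are built with this same AdaBoost training and cascade structure, so they make a convenient stand-in for a minimal detection sketch; the cascade file names below are OpenCV's stock ones, not anything specified by the patent:

```python
import cv2

# OpenCV's stock Haar cascades are trained with AdaBoost and chained into a
# cascade of strong classifiers, as described above.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_eye_images(frame_bgr):
    """Detect a face, then detect eyes inside it; return the cropped eye regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        face_roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            eyes.append(face_roi[ey:ey + eh, ex:ex + ew])
    return eyes
```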
In an embodiment, after the face image is obtained by face recognition, it may be further preprocessed. Face image preprocessing processes the image, based on the face detection result, so that it can ultimately serve feature extraction. The original face image acquired by the terminal is constrained by various conditions and subject to random interference, so it usually cannot be used directly; it must first undergo image preprocessing such as gray correction and noise filtering. For a face image, the preprocessing mainly includes light compensation, gray-level transformation, histogram equalization, normalization, geometric correction, filtering and sharpening.
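A sketch of such a preprocessing chain with OpenCV; the particular operations kept and their parameters (kernel sizes, unsharp-mask weights) are illustrative choices, not values given in this description:

```python
import cv2

def preprocess_face(face_bgr):
    """Example preprocessing chain: gray transform, equalization, denoising, sharpening."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)   # gray-level transformation
    equalized = cv2.equalizeHist(gray)                  # histogram equalization
    denoised = cv2.GaussianBlur(equalized, (3, 3), 0)   # noise filtering
    blurred = cv2.GaussianBlur(denoised, (0, 0), 2.0)   # base image for unsharp masking
    sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)  # sharpening
    return sharpened
```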
Step S102, judging whether the user watches the screen of the terminal at present according to the first image of the eyes of the user, if not, executing step S103, and if so, continuing to execute step S102.
There are various ways to judge from the eye image whether the user is currently watching the terminal screen. For example, the terminal can extract visual angle focus information from the eye image, which may include a focus position; the terminal then judges from the focus position whether the focus of the user's eyes is located in the terminal screen, and if so, it can determine that the user is currently watching the terminal screen. The visual angle focus information may further include the distance between the user's eyes and the terminal, the relative angle between the terminal and the eyes, and the like.
Specifically, as shown in fig. 3, the eyeball 2 is located at the center of the eye 1, and the pupil 3 is located at the center of the eyeball 2. The user's eye can be divided into four areas: upper left, upper right, lower right and lower left. The target area of the eye in which the terminal screen lies is determined in advance; for example, the terminal determines the relative position of the user's eyes and the terminal from an image acquired by the camera, and from that relative position determines the preset area of the eye in which the terminal lies, this preset area being the target area. The target area may also be set according to a user operation, which the present invention does not further limit. The terminal obtains the focus position of the user's eyes from the visual angle focus information extracted from the eye image, determines the area corresponding to that focus position, and judges whether this area is the target area in which the terminal screen lies; if so, it determines that the focus of the user's eyes is located in the terminal screen, i.e., the user is currently watching the terminal screen.
For example, as shown in fig. 4, suppose it is predetermined that the target area of the eye in which the terminal screen lies is the lower right area; if it can be determined from the eye image that the eyeball is currently also in the lower right area of the eye, it follows that the user is currently watching the terminal screen. After the camera acquires the user's eye image, image recognition and positioning techniques can exploit the difference between the color of the iris and the color and texture of the rest of the eye to judge the relative position of the iris on the whole eye and of the pupil on the eyeball, and thereby determine the area of the eye in which the eyeball lies.
In one embodiment, as shown in fig. 5, parts of the eyeball 2 in the eye image fall into all four areas (upper left, upper right, lower right and lower left). The area of the eyeball 2 falling into each region can then be computed; if the area falling into the lower right region is the largest, the eyeball is determined to be currently in the lower right area of the eye.
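A sketch of that area comparison, assuming the iris/sclera segmentation described above has already produced a binary eyeball mask over the eye region (the mask-building step itself is taken as given):

```python
import numpy as np

def dominant_quadrant(eyeball_mask: np.ndarray) -> str:
    """Return the eye quadrant holding the largest share of the segmented eyeball.

    eyeball_mask is a 2-D boolean array over the eye region, True where the
    eyeball (iris/pupil) was segmented out of the eye image.
    """
    h, w = eyeball_mask.shape
    quadrants = {
        "upper_left":  eyeball_mask[:h // 2, :w // 2],
        "upper_right": eyeball_mask[:h // 2, w // 2:],
        "lower_left":  eyeball_mask[h // 2:, :w // 2],
        "lower_right": eyeball_mask[h // 2:, w // 2:],
    }
    return max(quadrants, key=lambda q: int(quadrants[q].sum()))

# The user is judged to be watching the screen when this quadrant matches the
# precomputed target area, e.g.: watching = dominant_quadrant(mask) == "lower_right"
```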
In order to prevent incorrect operation, in the embodiment of the present invention, before judging from the eye image whether the user is currently gazing at the terminal screen, the distance between the terminal and the user's eyes is measured to decide whether the extraction of the visual angle focus information of the user's eyes needs to start; when the terminal is beyond a certain range from the user's eyes, the extraction operation is abandoned. Thus, before step S102, the method may further include:
detecting the distance between the terminal and the user's eyes through scanning infrared rays emitted by the camera; when the detected distance is within a preset threshold range, the camera performs the extraction operation to extract the visual angle focus information of the user's eyes, and when the detected distance exceeds the preset threshold range, the camera abandons the extraction operation. The distance can be detected from the scanning infrared rays emitted by the camera: for example, the distance between the eyeball and the camera can be calculated from the difference between the time the scanning infrared rays are emitted and the time they return to the camera after hitting the eyeball, multiplied by the speed of light (approximately 300,000 kilometers per second) and halved.
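The timing comparison reduces to distance = speed × round-trip time / 2. A sketch with the gating threshold (the threshold value is an assumption; the text only says "preset threshold range"):

```python
SPEED_OF_LIGHT_KM_S = 300_000    # the constant referenced above, ~3 x 10^5 km/s
MAX_EYE_DISTANCE_M = 1.5         # assumed threshold; the patent leaves the range unspecified

def eye_distance_m(emit_time_s: float, return_time_s: float) -> float:
    """Distance from the round-trip time of the scanning infrared pulse: d = c * dt / 2."""
    round_trip_s = return_time_s - emit_time_s
    return SPEED_OF_LIGHT_KM_S * 1000.0 * round_trip_s / 2.0

def should_extract_focus_info(emit_time_s: float, return_time_s: float) -> bool:
    """Gate the visual-angle-focus extraction on the measured eye distance."""
    return eye_distance_m(emit_time_s, return_time_s) <= MAX_EYE_DISTANCE_M
```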
Step S103: stop playing the image data and turn off the screen of the terminal.
If it is judged that the user is not currently watching the terminal screen, playback of the image data can be stopped and the terminal screen turned off, saving the terminal's battery power. Note that after the terminal screen is turned off, the terminal may continue to play the voice data in the navigation data to guide the user onward.
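Putting steps S101 to S103 together, a minimal control-loop sketch; `terminal` and `gaze_detector` are hypothetical interfaces standing in for the platform's camera, display and audio APIs, not anything named by the patent:

```python
def navigation_loop(terminal, gaze_detector):
    """Steps S101-S103: capture an eye image, judge the gaze, cut the screen if unwatched."""
    while terminal.navigation_active():
        frame = terminal.camera.capture()            # S101: first image of the user's eyes
        if gaze_detector.is_watching_screen(frame):  # S102: gaze judgment
            continue                                 # still watching: keep playing everything
        terminal.stop_image_playback()               # S103: stop playing the image data
        terminal.screen_off()                        # S103: turn off the screen
        # Voice data keeps playing, so the user still hears the turn-by-turn prompts.
```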
Depending on whether the display medium itself emits light, terminal screens can be divided into active light-emitting display devices, typified by the Organic Light-Emitting Diode (OLED), and passive light-emitting display devices, typified by the Liquid Crystal Display (LCD); screen brightness accordingly means either panel emission brightness or backlight brightness. In this embodiment, the terminal saves battery power by turning off the screen, specifically turning off the display panel or turning off the backlight according to the screen type.
As can be seen from the above, the intelligent navigation method provided in the embodiment of the present invention acquires a first image of the user's eyes through the camera of the terminal while the terminal plays navigation data, the navigation data including image data and voice data; judges from this first eye image whether the user is currently gazing at the terminal's screen; and, if not, stops playing the image data and turns off the screen. By automatically turning off the screen when the user's gaze is not on it during navigation, the invention effectively reduces the terminal's power consumption, avoids wasted battery power, and improves the user experience.
The intelligent navigation method of the present invention will be further explained below according to the description of the previous embodiment.
Referring to fig. 6, fig. 6 is another schematic flow chart of the intelligent navigation method according to the embodiment of the present invention, including:
step S201, in the process of playing the navigation data by the terminal, a first image of the eyes of the user is obtained through a camera of the terminal.
In the embodiment of the invention, in the process of playing the navigation data by the terminal, the eye image of the user can be acquired by the terminal camera (such as an infrared camera). The navigation data includes image data and voice data.
In actual use, for example when a user navigates with the terminal while driving, the image acquired by the terminal camera may contain several people, i.e., several eye images are acquired, and the target eye image corresponding to the driver must then be identified among them. Therefore, in an embodiment, after acquiring the user eye images through the camera of the terminal, the method may further include:
extracting characteristic information in the eye image;
matching the characteristic information with preset characteristic information;
If the matching succeeds, the eye image is determined to be the target eye image and step S202 is executed; if the matching fails, the eye image can be ignored. In the embodiment of the invention, the preset feature information can be preconfigured as the feature information corresponding to the driver's eye image, as sketched below.
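A sketch of that matching step, assuming the extracted feature information can be compared as numeric vectors; the cosine-similarity measure and the threshold are illustrative assumptions, not choices made by the patent:

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed similarity cutoff; the patent does not fix a value

def select_driver_eye(eye_feature_vectors, driver_template):
    """Return the first eye feature vector matching the preset driver template, else None."""
    for features in eye_feature_vectors:
        # Cosine similarity between the candidate features and the stored template.
        sim = float(np.dot(features, driver_template) /
                    (np.linalg.norm(features) * np.linalg.norm(driver_template)))
        if sim >= MATCH_THRESHOLD:
            return features  # matching succeeded: this is the target eye image
    return None              # matching failed for all candidates: ignore them
```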
In step S202, the visual angle focus information in the first image of the user's eyes is acquired.
In an embodiment of the present invention, the viewing angle focus information may include a focus position, and of course, in other embodiments, the viewing angle focus information may further include: the distance between the eyes of the user and the terminal, the relative angle between the terminal and the eyes and the like.
Step S203, determining whether the focus of the user' S eyes is located in the terminal screen according to the view angle focus information, if not, performing step S204, and if so, continuing to perform step S202.
Specifically, the user's eye can be divided into four areas (upper left, upper right, lower right and lower left), and the target area of the eye in which the terminal screen lies is determined in advance; for example, the terminal determines the relative position between the user's eyes and the terminal from an image acquired by the camera, and from that relative position determines the preset area of the eye in which the terminal lies, this preset area being the target area. The terminal obtains the focus position of the user's eyes from the visual angle focus information extracted from the eye image, determines the area corresponding to that focus position, and judges whether this area is the target area in which the terminal screen lies; if so, it determines that the focus of the user's eyes is located in the terminal screen.
And step S204, determining that the user does not watch the terminal screen currently.
In order to prevent misoperation, the embodiment of the present invention may also measure the distance between the terminal and the user's eyes before the visual angle focus information is extracted in step S202, to decide whether the extraction of the visual angle focus information of the user's eyes needs to start; when the terminal is beyond a certain range from the user's eyes, the extraction operation may be abandoned.
Step S205: stop playing the image data and turn off the terminal screen.
If it is judged that the user is not currently watching the terminal screen, playback of the image data can be stopped and the terminal screen turned off, saving the terminal's battery power. Note that after the terminal screen is turned off, the terminal may continue to play the voice data in the navigation data to guide the user onward.
And step S206, continuously judging whether the user watches the terminal screen at present, if so, executing step S207, and if not, continuously executing step S206.
Specifically, the step of continuously determining whether the user is looking at the terminal screen may also include:
and acquiring a second image of the eyes of the user through the camera of the terminal, judging whether the user watches the screen of the terminal currently according to the second image of the eyes of the user, and if the user is confirmed to watch the screen of the terminal currently, continuing to execute the step S207.
Step S207: light up the screen of the terminal and resume the playing of the image data.
In an embodiment, after determining that the user is currently gazing at the screen of the terminal, before lighting up the screen of the terminal and resuming playing of the image data, the method may further include:
acquiring the duration for which the user has been paying attention to the screen of the terminal;
judging whether the duration is longer than a preset duration or not;
and if so, executing the steps of lighting up the screen of the terminal and resuming the playing of the image data, as sketched below.
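A sketch of this dwell-time check; the threshold value and the `terminal`/`gaze_detector` interfaces are assumptions carried over from the earlier control-loop sketch:

```python
import time

DWELL_THRESHOLD_S = 1.0  # assumed "preset duration"; the patent does not fix a value

def relight_when_attention_held(terminal, gaze_detector):
    """Light the screen back up only once the gaze has stayed on it long enough."""
    gaze_start = None
    while not terminal.screen_is_on():
        frame = terminal.camera.capture()           # second image of the user's eyes
        if gaze_detector.is_watching_screen(frame):
            if gaze_start is None:
                gaze_start = time.monotonic()       # gaze just arrived: start timing
            if time.monotonic() - gaze_start >= DWELL_THRESHOLD_S:
                terminal.screen_on()                # light up the screen
                terminal.resume_image_playback()    # resume playing the image data
        else:
            gaze_start = None                       # attention broken: reset the timer
```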
As shown in fig. 7, which is an application scenario diagram of the intelligent navigation method according to an embodiment of the present invention, the method can determine whether the user is currently paying attention to the terminal screen, stop playing the image data and turn off the terminal screen to save power when the user is not, and light up the terminal screen and resume playing the image data when the user needs to view the navigation information.
As can be seen from the above, the intelligent navigation method provided in the embodiment of the present invention may, while the terminal plays navigation data, acquire a first image of the user's eyes through the camera of the terminal, obtain the visual angle focus information from that first image, and judge from it whether the focus of the user's eyes is located in the terminal's screen. If not, it determines that the user is not currently watching the screen, stops playing the image data and turns off the terminal screen; it then continues to judge whether the user is currently watching the screen, and if so, lights up the terminal screen and resumes playing the image data. By automatically turning the screen off when the user's gaze leaves it during navigation and lighting it again when the gaze returns, the invention effectively reduces the terminal's power consumption, avoids wasted battery power, and improves the user experience.
In order to better implement the intelligent navigation method provided by the embodiment of the invention, the embodiment of the invention also provides a device based on the intelligent navigation method. Terms used here have the same meaning as in the intelligent navigation method above; for implementation details, refer to the description in the method embodiments.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an intelligent navigation device 30 according to an embodiment of the present invention, including: a first image acquisition module 301, a first judgment module 302 and a first processing module 303;
the first image obtaining module 301 is configured to obtain a first image of eyes of a user through a camera of a terminal in a process that the terminal plays navigation data, where the navigation data includes image data and voice data;
the first judging module 302 is configured to judge whether the user watches the screen of the terminal currently according to the first image of the eyes of the user;
the first processing module 303 is configured to stop playing the image data and turn off the screen of the terminal when the first judging module 302 judges that the user is not currently gazing at the screen of the terminal.
In an embodiment, as shown in fig. 9, in the intelligent navigation device 30, the first determining module 302 specifically includes: an information acquisition sub-module 3021, a judgment sub-module 3022, and a determination sub-module 3023;
the information acquiring sub-module 3021 is configured to acquire visual angle focus information in the first image of the user's eyes, the visual angle focus information including a focus position;
the judging submodule 3022 is configured to judge whether the focal point of the user's eye is located in the screen of the terminal according to the viewing angle focal point information;
the determining sub-module 3023 is configured to determine that the user is not currently gazing at the screen of the terminal when the judging sub-module 3022 judges that the focus of the user's eyes is not located in the screen of the terminal.
In an embodiment, the intelligent navigation device 30 may further include: the device comprises an information extraction module and a matching module;
the information extraction module is configured to extract feature information from the first image of the user's eyes after the first image acquisition module 301 acquires the first image through the camera of the terminal and before the first judging module 302 judges whether the user is currently gazing at the screen of the terminal according to the first image, the feature information including iris information;
the matching module is used for matching the characteristic information with preset characteristic information;
the first judging module 302 is specifically configured to judge whether the user is currently gazing at the terminal screen according to the first image of the user's eyes when the matching module matches successfully.
In an embodiment, as shown in fig. 10, the intelligent navigation device 30 may further include: a second image acquisition module 304, a second determination module 305, and a second processing module 306;
the second image obtaining module 304 is configured to obtain a second image of the eyes of the user through the camera of the terminal after the first processing module 303 stops playing the image data and turns off the screen of the terminal;
the second judging module 305 is configured to judge whether the user watches the screen of the terminal currently according to the second image of the eyes of the user;
the second processing module 306 is configured to light up the screen of the terminal and resume playing of the image data when the second judging module 305 judges that the user is currently gazing at the screen of the terminal.
In an embodiment, the intelligent navigation device 30 may further include: the time length obtaining module and the third judging module;
the duration obtaining module is configured to obtain the duration for which the user has been paying attention to the screen of the terminal, after the second judging module 305 determines that the user is currently gazing at the screen of the terminal and before the second processing module 306 lights up the screen of the terminal and resumes playing of the image data;
the third judging module is used for judging whether the duration is longer than the preset duration or not;
the second processing module 306 is specifically configured to light up the screen of the terminal and resume playing of the image data when the third judging module judges that the duration is greater than the preset duration.
As can be seen from the above, in the intelligent navigation device 30 provided in the embodiment of the present invention, the first image acquisition module 301 acquires a first image of the user's eyes through the camera of the terminal while the terminal plays navigation data, the navigation data including image data and voice data; the first judging module 302 judges from this first eye image whether the user is currently gazing at the terminal's screen; and if it is determined that the user is not, the first processing module 303 stops playing the image data and turns off the screen. By automatically turning off the screen when the user's gaze is not on it during navigation, the device effectively reduces the terminal's power consumption and avoids wasting battery power.
The invention also provides a storage medium storing instructions which, when executed by a processor, implement the intelligent navigation method provided by the above method embodiments.
The invention also provides a terminal comprising a memory and a processor, the memory storing instructions and the processor loading the instructions to execute the intelligent navigation method provided by the above method embodiments.
In another embodiment of the present invention, a terminal is further provided, where the terminal may be a smart phone, a tablet computer, or the like. As shown in fig. 11, the terminal 400 includes a processor 401, a memory 402. The processor 401 is electrically connected to the memory 402.
The processor 401 is a control center of the terminal 400, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or loading an application stored in the memory 402 and calling data stored in the memory 402, thereby performing overall monitoring of the terminal.
In this embodiment, the processor 401 in the terminal 400 loads instructions corresponding to processes of one or more application programs into the memory 402 according to the following steps, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions:
in the process of playing navigation data by a terminal, acquiring a first image of eyes of a user through a camera of the terminal, wherein the navigation data comprises image data and voice data;
judging whether the user watches the screen of the terminal at present according to the first image of the eyes of the user;
and if the user is determined not to watch the screen of the terminal at present, stopping playing the image data and extinguishing the screen of the terminal.
In an embodiment, please refer to fig. 12, which is a schematic diagram of another terminal structure according to an embodiment of the present invention. The terminal 500 may include radio frequency (RF) circuitry 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, audio circuitry 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the terminal structure shown in fig. 12 is not limiting and may include more or fewer components than shown, combine some components, or arrange the components differently.
The rf circuit 501 may be used for receiving and transmitting information, or receiving and transmitting signals during a call, and in particular, receives downlink information of a base station and then sends the received downlink information to one or more processors 508 for processing; in addition, data relating to uplink is transmitted to the base station. In general, radio frequency circuit 501 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
The memory 502 may be used to store applications and data. Memory 502 stores applications containing executable code. The application programs may constitute various functional modules. The processor 508 executes various functional applications and data processing by executing application programs stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal, etc.
The input unit 503 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. In particular, in one embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or touch pad, may collect touch operations by a user on or near it (e.g., operations with a finger, a stylus, or any other suitable object or attachment) and drive the corresponding connection device according to a predetermined program. In addition, the touch-sensitive surface may be implemented using resistive, capacitive, infrared, or surface acoustic wave technology.
The display unit 504 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel.
The terminal may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. The terminal can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The audio circuit 506 may provide an audio interface between the user and the terminal through a speaker and a microphone. The audio circuit 506 can convert received audio data into an electrical signal and transmit it to the speaker, which converts it into an audible sound signal; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 506 receives and converts into audio data; the audio data is processed by the processor 508 and then, for example, sent to another terminal via the RF circuit 501 or output to the memory 502 for further processing.
Wireless fidelity (WiFi) belongs to short-distance wireless transmission technology, and the terminal can help the user to receive and send e-mail, browse web pages, access streaming media and the like through a wireless fidelity module 507, and provides wireless broadband internet access for the user. Although fig. 12 shows the wireless fidelity module 507, it is understood that it does not belong to the essential constitution of the terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 508 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, performs various functions of the terminal and processes data by running or executing an application program stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the terminal. Optionally, processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 508.
The terminal also includes a power supply 509 (such as a battery) for powering the various components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, so that charging, discharging, and power-consumption management are handled through the power management system. The power supply 509 may also include one or more DC or AC power sources, recharging systems, power-failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown in fig. 12, the terminal may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
The processor 508 is also configured to implement the following functions: in the process of playing navigation data by the terminal, a first image of eyes of a user is obtained through a camera of the terminal, wherein the navigation data comprises image data and voice data, whether the user watches the screen of the terminal at present is judged according to the first image of the eyes of the user, and if the user does not watch the screen of the terminal at present, the playing of the image data is stopped, and the screen of the terminal is extinguished.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above modules may refer to the foregoing method embodiments, which are not described herein again.
It should be noted that, as one of ordinary skill in the art will understand, all or part of the steps in the various methods of the above embodiments may be implemented by relevant hardware under the instruction of a program, which may be stored in a computer-readable storage medium, such as the memory of a terminal, and executed by at least one processor in the terminal; its execution may include the flow of embodiments such as the intelligent navigation method. The storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The intelligent navigation method, device, storage medium and terminal provided by the embodiments of the invention have been described in detail above. Each functional module may be integrated in one processing chip, each module may exist alone physically, or two or more modules may be integrated in one module; an integrated module may be realized in hardware or as a software functional module. The principles and embodiments of the present invention have been described herein using specific examples, which serve only to help understand the method and its core concept; meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (8)

1. An intelligent navigation method is characterized by being applied to a terminal and comprising the following steps:
in the process of playing navigation data by the terminal, acquiring a first image of eyes of a user by a camera of the terminal, wherein the navigation data comprises image data and voice data;
judging whether the user watches the screen of the terminal at present according to the first image of the eyes of the user, and specifically comprising the following steps: acquiring visual angle focus information in a first image of the user's eyes, wherein the visual angle focus information comprises a focus position, a distance between the user's eyes and the terminal, and a relative angle between the user's eyes and the terminal; determining a preset area of the terminal, which is positioned in the eyes, from a plurality of preset areas as a target area according to the distance between the eyes of the user and the terminal and the relative angle between the eyes of the user and the terminal; determining the area of the current eyeball in the whole eye according to the focal position; judging whether the area of the current eyeball in the whole eye is consistent with the target area; if not, determining that the user does not watch the screen of the terminal currently;
if the user is determined not to watch the screen of the terminal at present, stopping playing the image data, turning off the screen of the terminal, and continuing to play the voice data;
acquiring a second image of the eyes of the user through a camera of the terminal;
judging whether the user watches the screen of the terminal at present according to the second image of the eyes of the user, and specifically comprising the following steps: acquiring visual angle focus information in a second image of the user's eye, wherein the visual angle focus information comprises a focus position, a distance between the user's eye and the terminal, and a relative angle between the user's eye and the terminal; determining a preset area of the terminal, which is positioned in the eyes, from a plurality of preset areas as a target area according to the distance between the eyes of the user and the terminal and the relative angle between the eyes of the user and the terminal; determining the area of the current eyeball in the whole eye according to the focal position; judging whether the area of the current eyeball in the whole eye is consistent with the target area; if so, determining that the user currently watches the screen of the terminal;
and if it is determined that the user is currently gazing at the screen of the terminal, lighting up the screen of the terminal and resuming the playing of the image data.
2. The intelligent navigation method according to claim 1, wherein after acquiring the first image of the user's eyes through the camera of the terminal and before judging whether the user is currently gazing at the screen of the terminal according to the first image of the user's eyes, the method further comprises:
extracting feature information in a first image of the user's eye, the feature information including iris information;
matching the characteristic information with preset characteristic information;
and if the matching is successful, executing a step of judging whether the user watches the screen of the terminal currently according to the first image of the eyes of the user.
3. The intelligent navigation method according to claim 1, wherein after determining that the user is currently gazing at the screen of the terminal, before lighting up the screen of the terminal and resuming the playing of the image data, the method further comprises:
acquiring the duration for which the user has been paying attention to the screen of the terminal;
judging whether the duration is longer than a preset duration or not;
and if so, lighting up the screen of the terminal and resuming the playing of the image data.
4. An intelligent navigation device, comprising: the device comprises a first image acquisition module, a first judgment module, a first processing module, a second image acquisition module, a second judgment module and a second processing module;
the first image acquisition module is used for acquiring a first image of eyes of a user through a camera of the terminal in the process of playing navigation data by the terminal, wherein the navigation data comprises image data and voice data;
the first judging module is used for judging whether the user watches the screen of the terminal at present according to the first image of the eyes of the user;
the first judging module specifically comprises: the device comprises an information acquisition submodule, a judgment submodule and a determination submodule;
the information acquisition submodule is used for acquiring visual angle focus information in a first image of the user eyes, and the visual angle focus information comprises a focus position, the distance between the user eyes and the terminal and the relative angle between the user eyes and the terminal;
the judging submodule is used for determining a preset area of the terminal, which is positioned in eyes, from a plurality of preset areas as a target area according to the distance between the eyes of the user and the terminal and the relative angle between the eyes of the user and the terminal; determining the area of the current eyeball in the whole eye according to the focal position; judging whether the area of the current eyeball in the whole eye is consistent with the target area;
the determining submodule is used for determining that the user does not watch the screen of the terminal currently when the judging submodule judges that the screen is not watched;
the first processing module is configured to stop playing the image data, turn off the screen of the terminal and continue playing the voice data when the first judging module judges that the user is not currently gazing at the screen of the terminal;
the second image acquisition module is used for acquiring a second image of the eyes of the user through the camera of the terminal after the first processing module stops playing the image data and extinguishes the screen of the terminal;
the second judging module is used for judging whether the user is currently gazing at the screen of the terminal according to the second image of the user's eyes, and is specifically used for: acquiring visual angle focus information from the second image of the user's eyes, the visual angle focus information comprising a focal position, the distance between the user's eyes and the terminal, and the relative angle between the user's eyes and the terminal; determining, from the plurality of preset regions of the eye, the region in which the terminal is positioned as a target region according to the distance between the user's eyes and the terminal and the relative angle between the user's eyes and the terminal; determining the region of the whole eye in which the eyeball is currently positioned according to the focal position; judging whether the region in which the eyeball is currently positioned is consistent with the target region; and if so, determining that the user is currently gazing at the screen of the terminal;
and the second processing module is used for lighting up the screen of the terminal and resuming the playing of the image data when the judgment of the second judging module is affirmative.
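The region comparison performed by the judging submodule can be illustrated as follows. The 3x3 grid of preset eye regions, the +/-10 degree angle buckets, and focal coordinates normalised to [0, 1] are all assumptions made for illustration; the patent states only that a target region is chosen from preset regions using the distance and relative angle, and compared with the region in which the eyeball currently lies.

```kotlin
// Illustrative sketch of the judging submodule's region comparison.
enum class EyeRegion {
    TOP_LEFT, TOP, TOP_RIGHT,
    LEFT, CENTER, RIGHT,
    BOTTOM_LEFT, BOTTOM, BOTTOM_RIGHT
}

private val GRID = arrayOf(
    arrayOf(EyeRegion.TOP_LEFT, EyeRegion.TOP, EyeRegion.TOP_RIGHT),
    arrayOf(EyeRegion.LEFT, EyeRegion.CENTER, EyeRegion.RIGHT),
    arrayOf(EyeRegion.BOTTOM_LEFT, EyeRegion.BOTTOM, EyeRegion.BOTTOM_RIGHT)
)

// Map a relative angle (degrees) to one of three buckets; the +/-10 degree
// thresholds are illustrative, and in practice the distance between the
// eyes and the terminal would scale them.
private fun bucket(angleDeg: Double): Int = when {
    angleDeg < -10.0 -> 0
    angleDeg > 10.0 -> 2
    else -> 1
}

// Target region: where the terminal sits within the eye, derived from the
// relative viewing angles.
fun targetRegion(horizDeg: Double, vertDeg: Double): EyeRegion =
    GRID[bucket(vertDeg)][bucket(horizDeg)]

// Current region: where the eyeball lies within the whole eye, derived from
// the focal position normalised to [0, 1] on each axis.
fun currentRegion(focusX: Double, focusY: Double): EyeRegion =
    GRID[(focusY * 3).toInt().coerceIn(0, 2)][(focusX * 3).toInt().coerceIn(0, 2)]

// The user is judged to be gazing at the screen when the regions coincide.
fun isGazingAtScreen(horizDeg: Double, vertDeg: Double, focusX: Double, focusY: Double): Boolean =
    targetRegion(horizDeg, vertDeg) == currentRegion(focusX, focusY)
```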
5. The intelligent navigation device of claim 4, wherein the device further comprises: an information extraction module and a matching module;
the information extraction module is used for extracting feature information from the first image of the user's eyes after the first image acquisition module acquires the first image through the camera of the terminal and before the first judging module judges whether the user is currently gazing at the screen of the terminal, the feature information comprising iris information;
the matching module is used for matching the feature information against preset feature information;
and the first judging module is specifically used for judging whether the user is currently gazing at the screen of the terminal according to the first image of the user's eyes when the matching by the matching module succeeds.
6. The intelligent navigation device of claim 4, wherein the device further comprises: a duration acquisition module and a third judging module;
the duration acquisition module is used for acquiring the duration for which the user has been gazing at the screen of the terminal after the second judging module determines that the user is currently gazing at the screen of the terminal and before the second processing module lights up the screen of the terminal and resumes the playing of the image data;
the third judging module is used for judging whether the duration is longer than a preset duration;
and the second processing module is specifically used for lighting up the screen of the terminal and resuming the playing of the image data when the judgment of the third judging module is affirmative.
7. A storage medium storing instructions which, when executed by a processor, implement the steps of the intelligent navigation method of any one of claims 1 to 3.
8. A terminal comprising a memory and a processor, the memory storing instructions and the processor loading the instructions to perform the intelligent navigation method of any one of claims 1 to 3.
CN201710526234.3A 2017-06-30 2017-06-30 Intelligent navigation method, device, storage medium and terminal Expired - Fee Related CN107300967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710526234.3A CN107300967B (en) 2017-06-30 2017-06-30 Intelligent navigation method, device, storage medium and terminal


Publications (2)

Publication Number Publication Date
CN107300967A (en) 2017-10-27
CN107300967B (en) 2020-07-07

Family

ID=60136089

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710526234.3A Expired - Fee Related CN107300967B (en) 2017-06-30 2017-06-30 Intelligent navigation method, device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN107300967B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108184019A (en) * 2017-12-27 2018-06-19 努比亚技术有限公司 A kind of information reading device and method
CN108427938A (en) * 2018-03-30 2018-08-21 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN109194825A (en) * 2018-08-15 2019-01-11 北京小米移动软件有限公司 Early warning and reminding method and terminal
CN109274810A (en) * 2018-09-29 2019-01-25 努比亚技术有限公司 A kind of terminal processing method, terminal and computer readable storage medium
CN109259434B (en) * 2018-11-17 2022-05-10 北京华谛盟家具有限公司 Intelligent U-shaped conference table
CN111077989B (en) * 2019-05-27 2023-11-24 广东小天才科技有限公司 Screen control method based on electronic equipment and electronic equipment
CN110555133A (en) * 2019-09-04 2019-12-10 安徽星光璀璨文化发展有限公司 Intelligent advertisement promotion system for online taxi appointment
CN110888247A (en) * 2019-11-20 2020-03-17 Tcl华星光电技术有限公司 Display panel
CN113204281A (en) * 2021-03-22 2021-08-03 闻泰通讯股份有限公司 Method and device for dynamically adjusting terminal screen brightness, electronic equipment and storage medium
CN113596345B (en) * 2021-08-09 2023-01-17 荣耀终端有限公司 Parameter adjustment method, display control method, electronic device, and medium
CN113628579A (en) * 2021-08-09 2021-11-09 深圳市优聚显示技术有限公司 LED energy-saving display method, LED display screen system and LCD display equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893934A (en) * 2010-06-25 2010-11-24 宇龙计算机通信科技(深圳)有限公司 Method and device for intelligently adjusting screen display
CN102736722A (en) * 2011-03-31 2012-10-17 国基电子(上海)有限公司 An electronic device with double display screens and a method for controlling screen display thereof
WO2013063813A1 (en) * 2011-11-06 2013-05-10 Liv Runchun Energy-saving vehicle navigator apparatus
CN103677215A (en) * 2014-01-02 2014-03-26 重庆市科学技术研究院 Power-saving control method for screen of intelligent device
CN104238948A (en) * 2014-09-29 2014-12-24 广东欧珀移动通信有限公司 Method for illumining screen of smart watch and smart watch
CN106293100A (en) * 2016-08-24 2017-01-04 上海与德通讯技术有限公司 The determination method of sight line focus and virtual reality device in virtual reality device
CN106557150A (en) * 2016-11-08 2017-04-05 北京小米移动软件有限公司 Terminal control method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101504291A (en) * 2009-02-25 2009-08-12 江苏华科导航科技有限公司 Navigation apparatus capable of simultaneously performing navigation and CMMB broadcast, and its working method
CN101860704B (en) * 2009-04-10 2013-05-08 深圳Tcl新技术有限公司 Display device for automatically closing image display and realizing method thereof
CN103123537B (en) * 2011-11-21 2016-04-20 国基电子(上海)有限公司 Electronic display unit and electricity saving method thereof
CN105912109A (en) * 2016-04-06 2016-08-31 众景视界(北京)科技有限公司 Screen automatic switching device of head-wearing visual device and head-wearing visual device
CN106197462A (en) * 2016-07-01 2016-12-07 上海卓易云汇智能技术有限公司 Air navigation aid and system

Also Published As

Publication number Publication date
CN107300967A (en) 2017-10-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp., Ltd.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200707