CN112860059A - Image identification method and device based on eyeball tracking and storage medium - Google Patents
- Publication number
- CN112860059A (application number CN202110023384.9A)
- Authority
- CN
- China
- Prior art keywords
- virtual image
- reference points
- model
- area
- marking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention discloses an image recognition method, device, and storage medium based on eyeball tracking. The recognition method comprises the following steps. S1: acquiring image data and establishing a virtual image model from the image data. S2: acquiring eye movement data and judging from it whether the gaze duration at any coordinate in the virtual image model exceeds a preset duration; if so, marking that coordinate as a reference point and recording its marking time. S3: connecting the reference points in sequence according to their marking times and judging whether the connecting lines between them enclose a closed area; if so, marking the closed area as the reference area. S4: performing image recognition on the reference area in the virtual image model to generate feature information for it. The invention improves the interactivity of eyeball-tracking technology and the user experience.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image recognition method and apparatus based on eye tracking, and a storage medium.
Background
Eyeball tracking is a technology that captures eyeball motion information, models and simulates it, and then estimates the sight direction or gaze position; through it, a person's reaction to a given object can be better understood. However, existing eye-tracking technology is generally used only to report the position of a point of interest while the user looks at an object. This process is unidirectional, so the interactivity of existing eye tracking is poor. In addition, existing eye tracking merely converts the user's eyeball motion trajectory into a movement path in computer terms and cannot read the features of the object the user is currently attending to, so many detailed parts of the image are ignored and the user experience suffers.
Disclosure of Invention
In order to overcome the disadvantages of the prior art, an object of the present invention is to provide an image recognition method based on eye tracking that improves the interactivity of eye-tracking technology and the user experience.
Another object of the present invention is to provide an electronic device.
It is a further object of the present invention to provide a storage medium.
One of the purposes of the invention is realized by adopting the following technical scheme:
an image identification method based on eyeball tracking comprises the following steps:
step S1: acquiring image data, and establishing a virtual image model according to the image data;
step S2: acquiring eye movement data, and judging from the eye movement data whether the gaze duration at any coordinate in the virtual image model exceeds a preset duration; if so, marking that coordinate as a reference point and recording its marking time;
step S3: connecting the reference points in sequence according to their marking times, judging whether the connecting lines between them enclose a closed area, and if so, marking the closed area as the reference area;
step S4: and carrying out image recognition on the reference region in the virtual image model to generate feature information of the reference region.
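Read as an algorithm, steps S1 to S4 form a small pipeline. The following Python is only an illustrative sketch, not the patented implementation; every function name, the data formats, and the dwell threshold are assumptions:

```python
def build_virtual_model(image_data):
    # S1: in practice this builds a 2-D or 3-D virtual image model;
    # here it is a trivial pass-through placeholder
    return {"image": image_data}

def mark_reference_points(eye_data, dwell=1.0):
    # S2: keep gaze coordinates whose dwell time meets the preset
    # duration, recording the marking time of each reference point.
    # eye_data is assumed to be (timestamp, (x, y), dwell_seconds) tuples.
    return [(t, xy) for t, xy, stay in eye_data if stay >= dwell]

def find_closed_area(refs):
    # S3: connect points in marking-time order; treat the path as closed
    # when the first and last reference points coincide
    pts = [xy for _, xy in sorted(refs)]
    return pts if len(pts) >= 4 and pts[0] == pts[-1] else None

def extract_features(model, area):
    # S4: placeholder for image recognition inside the closed area
    return {"vertices": len(area) - 1}

def recognize(image_data, eye_data):
    model = build_virtual_model(image_data)   # S1
    refs = mark_reference_points(eye_data)    # S2
    area = find_closed_area(refs)             # S3
    return extract_features(model, area) if area else None  # S4
```

If no closed area can be formed from the dwelled-on coordinates, the sketch returns None rather than running recognition, mirroring the conditional in step S3.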
Further, the image data includes two-dimensional image data and three-dimensional image data, a two-dimensional virtual image model is established according to the two-dimensional image data, and a three-dimensional virtual image model is established according to the three-dimensional image data.
Further, the eye movement data also comprises an eye movement path, and a moving cursor matched with the eye movement path is created in the virtual image model according to the eye movement path.
Further, the method for determining in step S3 whether the connecting lines between the reference points form a closed area includes:
judging whether two reference points with different marking times coincide with each other; if so, determining that the connecting lines between the reference points necessarily enclose a closed area; or, alternatively,
judging whether the shortest distance between two reference points with different marking times is smaller than a preset distance; if so, determining that the connecting lines between the reference points necessarily enclose a closed area.
Further, before generating the reference points, the method also includes: judging whether a magnification instruction has been received; if so, magnifying the specified area in the virtual image model according to the instruction, so that reference points are generated by gazing at the magnified area.
Further, the method also includes: judging in real time whether a cancel instruction has been received, and if so, deleting the most recently generated reference point according to the marking times.
Further, the method for performing image recognition on the reference region in the virtual image model in step S4 includes:
identifying the contours of all objects in the reference area of the virtual image model, comparing each object contour with the contour of each object's standard model in a preset database, and, when the contour of an object in the reference area matches any standard model, determining the type of that object and generating corresponding feature information.
Furthermore, pre-stored information is bound to the standard model, and when the object in the reference area is judged to match any standard model, the pre-stored information is displayed.
The second purpose of the invention is realized by adopting the following technical scheme:
an electronic device comprises a processor, a memory and a computer program stored on the memory and operable on the processor, wherein the processor implements the eyeball tracking-based image recognition method when executing the computer program.
The third purpose of the invention is realized by adopting the following technical scheme:
a storage medium having stored thereon a computer program which, when executed, implements the above-described eye tracking-based image recognition method.
Compared with the prior art, the invention has the beneficial effects that:
A reference area is marked out in the virtual image model through eyeball tracking, and the image features inside it are recognized to generate the corresponding feature information. This makes use of the technology more interactive: the user reads more feature information while viewing an image, including details that the naked eye easily misses, which improves the user experience.
Drawings
Fig. 1 is a schematic flow chart of an image recognition method based on eye tracking according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description. It should be noted that, provided there is no conflict, the embodiments and technical features described below can be combined to form new embodiments.
Example one
This embodiment provides an image recognition method based on eyeball tracking. With this method, detail features in an image that the naked eye would not notice can be read, the degree of interaction of eyeball-tracking technology can be improved, and the user's experience and engagement are enhanced.
As shown in fig. 1, the method for identifying an image based on eye tracking of the present embodiment specifically includes the following steps:
step S1: and acquiring image data, and establishing a virtual image model according to the image data.
The image data include two-dimensional and three-dimensional image data; a two-dimensional virtual image model is built from the former and a three-dimensional virtual image model from the latter. The image data may depict a single object or a combined image of several objects. The image data are imported into designated software, which generates the corresponding virtual image model so that the user can inspect its various detailed features through eyeball tracking.
Step S2: and acquiring eye movement data, judging whether the watching time of any coordinate in the virtual image model exceeds a preset time according to the eye movement data, if so, marking the coordinate as a reference point and recording the marking time for the reference point.
The eye movement data are generated by an eye tracker: the user wears the eye tracker and views the virtual image model to produce the corresponding eye movement data. When the virtual image model is three-dimensional, the user can view it with an eye tracker combined with VR technology, so that feature recognition can be performed at any position of the three-dimensional model.
The eye movement data comprise an eye movement path and gaze durations. The eye movement path is the movement path of the user's eyeballs, formed by connecting a series of gaze coordinate points; the gaze duration is the time the user's eyes dwell on any gaze coordinate. When the gaze duration at a certain coordinate of the virtual image model exceeds the preset duration, that coordinate is marked as a reference point, and the current time is recorded as its marking time, so that the order in which different reference points were generated can be determined. In this embodiment, a moving cursor matched to the eye movement path may be created in the virtual image model; the cursor moves along with the user's eyeballs, letting the user verify that the current gaze coordinate matches the intended gaze position.
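The dwell test described above can be sketched as follows. This is an illustrative sketch only; the sample format, the radius tolerance, and the thresholds are assumptions, not part of the patent:

```python
import math

def detect_reference_points(samples, dwell_threshold=0.8, radius=10.0):
    """Mark a coordinate as a reference point once the gaze stays within
    `radius` units of it for at least `dwell_threshold` seconds, and
    record the marking time at which the threshold was crossed.

    `samples` is an iterable of (timestamp, x, y) gaze tuples.
    Returns a list of (x, y, mark_time) reference points.
    """
    refs = []
    anchor = None   # (t0, x0, y0) where the current candidate dwell began
    marked = False  # whether this dwell has already produced a point
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > radius:
            anchor, marked = (t, x, y), False   # gaze moved: restart dwell
        elif not marked and t - anchor[0] >= dwell_threshold:
            refs.append((anchor[1], anchor[2], t))  # record marking time
            marked = True
    return refs
```

The radius tolerance stands in for the fact that raw gaze samples jitter around a fixation point; without it, no real eye-tracker stream would ever register a dwell on a single exact coordinate.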
Step S3: and connecting the plurality of reference points in sequence according to the marking time of the reference points, judging whether the connection lines among the plurality of reference points form a closed area, and if so, marking the closed area as the reference area.
Since the marking times of the reference points follow the user's gaze sequence, different reference points necessarily have different marking times, and in this embodiment the reference area is determined from them. The methods for determining whether the connecting lines between the reference points form a closed area are as follows:
the method comprises the following steps: judging whether two reference points with different marking times coincide with each other or not, and if the two reference points with different marking times coincide with each other, considering that a connection line between the reference points necessarily has a closed area;
the second method comprises the following steps: judging whether connecting lines among the reference points are intersected or not, if the connecting lines are intersected, taking the intersected point as a new reference point to enable the intersected point to be surrounded with other reference points to form a closed area;
the third method comprises the following steps: judging whether the shortest distance between two reference points with different marking time is smaller than a preset distance or not, if so, considering that the two reference points are approximately coincident, fusing the two reference points with the shortest distance smaller than the preset distance together, and enabling connecting lines of the two reference points and other reference points to enclose a closed area.
In this embodiment, the image in the reference area is recognized so that finer features within it can be identified.
In this embodiment, before the reference points are generated, it is further determined whether a magnification instruction has been received. The instruction may come from an external device such as a mouse, keyboard, or remote control, or from eyeball tracking itself. A magnify button is provided in the software: the user selects it with the external device to generate a magnification instruction, then uses the device to select the area to be magnified, and the specified area is magnified. Alternatively, gazing at the magnify button for longer than the preset duration triggers the magnification function and generates the instruction; the software then prompts the user to gaze at any position in the virtual image model for longer than the preset duration, takes the gazed coordinate as the magnification origin, and magnifies the model by a specified factor. After magnification, reference points are generated by gazing at the magnified area, which improves the accuracy of marking reference points in the virtual image model.
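Magnifying about the gazed-at origin is a simple coordinate scaling: the gaze coordinate stays fixed while every other model coordinate moves away from it by the specified factor. A sketch, with the function name and 2-D coordinate form assumed:

```python
def magnify_about_gaze(coords, gaze_origin, factor):
    # Scale each (x, y) model coordinate about the magnification origin;
    # the origin itself maps to itself, so the gazed point stays put.
    ox, oy = gaze_origin
    return [(ox + (x - ox) * factor, oy + (y - oy) * factor) for x, y in coords]
```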
Meanwhile, whether a cancel instruction has been received can be judged in real time. The cancel instruction can likewise be generated by an external device or by eyeball tracking: the user selects a cancel button in the software with the external device, or gazes at the cancel button for a sustained period. When a cancel instruction is received, the most recently generated reference point is deleted according to the marking times.
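Deleting the newest reference point "according to the marking time" amounts to removing the point with the largest recorded timestamp. A sketch, assuming the same hypothetical (mark_time, x, y) tuple format as above:

```python
def cancel_latest(ref_points):
    # On a cancel instruction, delete the most recently generated
    # reference point, i.e. the one with the largest marking time.
    if not ref_points:
        return ref_points
    newest = max(ref_points, key=lambda p: p[0])  # p = (mark_time, x, y)
    return [p for p in ref_points if p is not newest]
```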
In this embodiment, to help the user track the gaze duration, a countdown clock is shown while the user is gazing. It starts counting down when the gaze begins, and once the countdown finishes the gaze duration has necessarily exceeded the preset duration, so a reference point is generated or the magnify or cancel button is triggered.
Step S4: and carrying out image recognition on the reference region in the virtual image model to generate feature information of the reference region.
In this embodiment, a preset database is provided that stores standard models of many different objects in advance, together with data such as object type, object name, object contour image, and object use. After a reference area is determined in the virtual image model, the contours of all objects in the reference area are recognized through image recognition and compared with the contours of the standard models in the preset database; when the contour of an object in the reference area matches any standard model, the object's type is determined and the relevant data of the standard model are displayed as feature information. In addition, when an object in the reference area is judged to be a person, the face in the reference area can be recognized by face recognition technology and the corresponding facial feature information generated and displayed. The standard model can also be bound with pre-stored information, which may be any information related or unrelated to the object; when an object in the reference area matches a standard model, the corresponding pre-stored information is displayed. In this embodiment, the feature information may be shown in a pop-up window in the software, so that the user can view the feature information of the reference area while viewing the virtual image model, improving the user experience.
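The contour-comparison step can be illustrated with a crude scale-invariant descriptor: distances from an object contour's centroid to evenly sampled boundary points, normalized by their mean. This only sketches the matching idea; a real system would use a proper contour matcher, and the descriptor, threshold, and database layout here are all assumptions:

```python
import math

def contour_signature(contour, n=16):
    """Scale-invariant outline descriptor: centroid-to-boundary distances
    at n evenly sampled contour points, normalized by their mean."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    d = [d[i * len(d) // n] for i in range(n)]
    mean = sum(d) / n
    return [v / mean for v in d]

def classify_object(contour, database, threshold=0.15):
    """Compare a detected contour against each standard model's outline;
    return the feature info of the best match under `threshold`, or None.

    `database` maps a model name to (model_contour, feature_info)."""
    sig = contour_signature(contour)
    best, best_err = None, threshold
    for name, (model_contour, info) in database.items():
        ref = contour_signature(model_contour)
        err = sum(abs(a - b) for a, b in zip(sig, ref)) / len(sig)
        if err < best_err:
            best, best_err = info, err
    return best
```

Because the signature is normalized by its mean, a magnified or shrunken view of the same outline produces the same descriptor, which matters here since the user may magnify the model before marking the reference area.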
Example two
The embodiment provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the image recognition method based on eye tracking in the first embodiment when executing the computer program; in addition, the present embodiment also provides a storage medium on which a computer program is stored, the computer program implementing the above-mentioned image recognition method based on eye tracking when executed.
The apparatus and the storage medium in this embodiment are based on two aspects of the same inventive concept, and the method implementation process has been described in detail in the foregoing, so that those skilled in the art can clearly understand the structure and implementation process of the system in this embodiment according to the foregoing description, and for the sake of brevity of the description, details are not repeated here.
The above embodiments are only preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are within the protection scope of the present invention.
Claims (10)
1. An image recognition method based on eyeball tracking is characterized by comprising the following steps:
step S1: acquiring image data, and establishing a virtual image model according to the image data;
step S2: acquiring eye movement data, and judging from the eye movement data whether the gaze duration at any coordinate in the virtual image model exceeds a preset duration; if so, marking that coordinate as a reference point and recording its marking time;
step S3: connecting the reference points in sequence according to their marking times, judging whether the connecting lines between them enclose a closed area, and if so, marking the closed area as the reference area;
step S4: and carrying out image recognition on the reference region in the virtual image model to generate feature information of the reference region.
2. The eyeball tracking-based image recognition method according to claim 1, wherein the image data includes two-dimensional image data and three-dimensional image data, a two-dimensional virtual image model is created from the two-dimensional image data, and a three-dimensional virtual image model is created from the three-dimensional image data.
3. The method according to claim 1, wherein the eye movement data further includes an eye movement path, and a moving cursor matching the eye movement path is created in the virtual image model according to the eye movement path.
4. The method for image recognition based on eye tracking according to claim 1, wherein the step S3 of determining whether the connection line between the plurality of reference points constitutes a closed region comprises:
judging whether two reference points with different marking times coincide with each other; if so, determining that the connecting lines between the reference points necessarily enclose a closed area; or, alternatively,
judging whether the shortest distance between two reference points with different marking times is smaller than a preset distance; if so, determining that the connecting lines between the reference points necessarily enclose a closed area.
5. The method for recognizing an image based on eye tracking according to claim 1, further comprising, before generating the reference point: judging whether an amplification instruction is received or not, if so, amplifying the specified area in the virtual image model according to the amplification instruction, and watching the amplified area to generate a reference point.
6. The method for image recognition based on eye tracking according to claim 1, further comprising: and judging whether a cancel instruction is received or not in real time, and if the cancel instruction is received, deleting the newly generated reference point according to the marking time.
7. The method for image recognition based on eye tracking according to claim 1, wherein the step S4 is performed by performing image recognition on the reference region in the virtual image model by:
identifying the contours of all objects in the reference area of the virtual image model, comparing each object contour with the contour of each object's standard model in a preset database, and, when the contour of an object in the reference area matches any standard model, determining the type of that object and generating corresponding feature information.
8. The eyeball tracking-based image recognition method according to claim 7, wherein prestored information is bound to the standard model, and when it is determined that the object in the reference region matches any one of the standard models, the prestored information is displayed.
9. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for image recognition based on eye tracking according to any one of claims 1 to 8 when executing the computer program.
10. A storage medium having stored thereon a computer program which, when executed, implements the eye tracking-based image recognition method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110023384.9A CN112860059A (en) | 2021-01-08 | 2021-01-08 | Image identification method and device based on eyeball tracking and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112860059A true CN112860059A (en) | 2021-05-28 |
Family
ID=76005375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110023384.9A Pending CN112860059A (en) | 2021-01-08 | 2021-01-08 | Image identification method and device based on eyeball tracking and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112860059A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009019760A1 (en) * | 2007-08-07 | 2009-02-12 | Osaka Electro-Communication University | Moving object detector, moving object detection method, pointing device, computer program and storage medium |
CN102930334A (en) * | 2012-10-10 | 2013-02-13 | 北京凯森世纪科技发展有限公司 | Video recognition counter for body silhouette |
US20130054622A1 (en) * | 2011-08-29 | 2013-02-28 | Amit V. KARMARKAR | Method and system of scoring documents based on attributes obtained from a digital document by eye-tracking data analysis |
CN106056092A (en) * | 2016-06-08 | 2016-10-26 | 华南理工大学 | Gaze estimation method for head-mounted device based on iris and pupil |
CN106127149A (en) * | 2016-06-22 | 2016-11-16 | 南京大学 | A kind of flow chart groups of method and apparatus of stroke based on eye movement data |
WO2017051025A1 (en) * | 2015-09-25 | 2017-03-30 | Itu Business Development A/S | A computer-implemented method of recovering a visual event |
CN108549874A (en) * | 2018-04-19 | 2018-09-18 | 广州广电运通金融电子股份有限公司 | A kind of object detection method, equipment and computer readable storage medium |
CN108874148A (en) * | 2018-07-16 | 2018-11-23 | 北京七鑫易维信息技术有限公司 | A kind of image processing method and device |
CN109086726A (en) * | 2018-08-10 | 2018-12-25 | 陈涛 | A kind of topography's recognition methods and system based on AR intelligent glasses |
CN109587344A (en) * | 2018-12-28 | 2019-04-05 | 北京七鑫易维信息技术有限公司 | Call control method, device, mobile terminal and medium based on mobile terminal |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113408406A (en) * | 2021-06-17 | 2021-09-17 | 杭州嘉轩信息科技有限公司 | Sight tracking method and system |
CN113408406B (en) * | 2021-06-17 | 2023-02-28 | 杭州嘉轩信息科技有限公司 | Sight tracking method and system |
CN113419631A (en) * | 2021-06-30 | 2021-09-21 | 珠海云洲智能科技股份有限公司 | Formation control method, electronic device and storage medium |
CN113419631B (en) * | 2021-06-30 | 2022-08-09 | 珠海云洲智能科技股份有限公司 | Formation control method, electronic device and storage medium |
CN116246332A (en) * | 2023-05-11 | 2023-06-09 | 广东工业大学 | Eyeball tracking-based data labeling quality detection method, device and medium |
CN116246332B (en) * | 2023-05-11 | 2023-07-28 | 广东工业大学 | Eyeball tracking-based data labeling quality detection method, device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11087551B2 (en) | Systems and methods for attaching synchronized information between physical and virtual environments | |
CN112860059A (en) | Image identification method and device based on eyeball tracking and storage medium | |
US11127210B2 (en) | Touch and social cues as inputs into a computer | |
CN107810465B (en) | System and method for generating a drawing surface | |
CN110716645A (en) | Augmented reality data presentation method and device, electronic equipment and storage medium | |
US20230274513A1 (en) | Content creation in augmented reality environment | |
Petersen et al. | Cognitive augmented reality | |
US20130174213A1 (en) | Implicit sharing and privacy control through physical behaviors using sensor-rich devices | |
US20220179609A1 (en) | Interaction method, apparatus and device and storage medium | |
US11436828B1 (en) | Insurance inventory and claim generation | |
CN108615159A (en) | Access control method and device based on blinkpunkt detection | |
KR20160086840A (en) | Persistent user identification | |
CN107210830B (en) | Object presenting and recommending method and device based on biological characteristics | |
KR20110104686A (en) | Marker size based interaction method and augmented reality system for realizing the same | |
EP3005034A1 (en) | Tagging using eye gaze detection | |
KR20140026629A (en) | Dynamic gesture recognition process and authoring system | |
US20220300066A1 (en) | Interaction method, apparatus, device and storage medium | |
US20230024586A1 (en) | Learning device, learning method, and recording medium | |
US20210117040A1 (en) | System, method, and apparatus for an interactive container | |
CN111512370A (en) | Voice tagging of video while recording | |
US10788887B2 (en) | Image generation program, image generation device, and image generation method | |
KR20190067433A (en) | Method for providing text-reading based reward advertisement service and user terminal for executing the same | |
CN106462869B (en) | Apparatus and method for providing advertisement using pupil tracking | |
JP6318289B1 (en) | Related information display system | |
CN112860060B (en) | Image recognition method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 238, room 406, 1 Yichuang street, Huangpu District, Guangzhou, Guangdong 510700. Applicant after: Guangzhou langguo Electronic Technology Co.,Ltd. Address before: Room 238, room 406, 1 Yichuang street, Huangpu District, Guangzhou, Guangdong 510700. Applicant before: GUANGZHOU LANGO ELECTRONIC SCIENCE & TECHNOLOGY Co.,Ltd. |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210528 |