CN112668589A - Pairing method and apparatus for head-mounted electronic device - Google Patents


Info

Publication number
CN112668589A
CN112668589A (application CN202011638457.7A)
Authority
CN
China
Prior art keywords
target
image
processed
head
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011638457.7A
Other languages
Chinese (zh)
Inventor
刘磊
Current Assignee
Shining Reality Wuxi Technology Co Ltd
Original Assignee
Shining Reality Wuxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shining Reality Wuxi Technology Co Ltd filed Critical Shining Reality Wuxi Technology Co Ltd
Priority to CN202011638457.7A
Publication of CN112668589A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a pairing method and apparatus for a head-mounted electronic device. The method comprises: acquiring at least one to-be-processed image, wherein the to-be-processed image is captured by an image acquisition device of the head-mounted electronic device from a display screen of a target electronic device, the display screen is within the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; for each to-be-processed image in the at least one to-be-processed image, performing target object recognition on the to-be-processed image to obtain target feature information of the target object; and in response to determining that the obtained target feature information matches pre-stored feature information, pairing the head-mounted electronic device with the target electronic device. In this way, the available pairing modes between the head-mounted electronic device and external electronic devices are enriched, making pairing more flexible.

Description

Pairing method and apparatus for head-mounted electronic device
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to a pairing method and apparatus for a head-mounted electronic device.
Background
Currently, head-mounted electronic devices may be applied to a variety of scenarios. In a typical application scenario, the head-mounted electronic device may be paired with an external electronic device, and after the pairing is successful, the head-mounted electronic device may perform data transmission with the external electronic device.
Generally, a head-mounted electronic device may be paired with an external electronic device via Bluetooth, NFC, or similar methods. For example, the external electronic device may generate a two-dimensional code based on its own Bluetooth information, and the head-mounted electronic device pairs with it by recognizing the two-dimensional code. In practice, however, such pairing methods are limited in variety and do not allow the head-mounted electronic device and the external electronic device to be paired flexibly.
Disclosure of Invention
The present disclosure provides a pairing method and apparatus for a head-mounted electronic device.
In a first aspect, a pairing method for a head-mounted electronic device is provided, comprising: acquiring at least one to-be-processed image, wherein the to-be-processed image is captured by an image acquisition device of the head-mounted electronic device from a display screen of a target electronic device, the display screen is within the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; for each to-be-processed image in the at least one to-be-processed image, performing target object recognition on the to-be-processed image to obtain target feature information of the target object; and in response to determining that the obtained target feature information matches pre-stored feature information, pairing the head-mounted electronic device with the target electronic device.
In a second aspect, a pairing apparatus for a head-mounted electronic device is provided, comprising: an acquisition module configured to acquire at least one to-be-processed image, wherein the to-be-processed image is captured by an image acquisition device of the head-mounted electronic device from a display screen of a target electronic device, the display screen is within the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; a recognition module configured to perform, for each to-be-processed image in the at least one to-be-processed image, target object recognition on the to-be-processed image to obtain target feature information of the target object; and a pairing module configured to pair the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches pre-stored feature information.
In a third aspect, an electronic device is provided, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method as provided in the first aspect.
In a fourth aspect, a readable storage medium is provided, on which a program or instructions are stored, which program or instructions, when executed by a processor, implement the steps of the method as provided in the first aspect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and not to limit the disclosure. In the drawings:
fig. 1 is a schematic flowchart of a first embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 2 is a schematic flowchart of a second embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 3 is a schematic flowchart of a third embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 4 is a schematic flowchart of a fourth embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 5 is a schematic flowchart of a fifth embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 6 is a schematic flowchart of a sixth embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 7 is a schematic flowchart of a seventh embodiment of a pairing method for a head-mounted electronic device according to the present application;
fig. 8 is a schematic structural diagram of an electronic device for a pairing method of a head-mounted electronic device according to the present application;
fig. 9 is a schematic structural diagram of a pairing apparatus for a pairing method of a head-mounted electronic device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In the present disclosure, a head-mounted electronic device may include an image capture device. The head-mounted electronic device may be a pair of smart glasses, or may be other head-mounted electronic devices (e.g., a head-mounted display device HMD) including an image capturing device, and is not limited herein. The target electronic device paired with the head-mounted electronic device may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like.
Further, the head-mounted electronic device may itself perform the pairing method provided by the present disclosure; in this case, it has image processing, data processing, and data storage functions. Alternatively, the head-mounted electronic device may be unable to implement image processing, data processing, and data storage independently, but may implement them jointly with other devices and perform the pairing method provided by the present disclosure together with those devices. For example, the head-mounted electronic device and a smart device (e.g., a smartphone or tablet computer) may jointly implement image processing, data processing, or data storage. As a further alternative, the head-mounted electronic device may lack these functions entirely and instead interact with another electronic device (e.g., a smartphone or tablet computer) that has them, with that electronic device executing the pairing method provided by the present disclosure.
Technical solutions provided by the embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Please refer to fig. 1, which is a flowchart illustrating a pairing method for a head-mounted electronic device according to a first embodiment of the present application. In the embodiment shown in fig. 1, the pairing method for the head-mounted electronic device includes the following steps.
S102: acquire at least one to-be-processed image.
Typically, after putting on the head-mounted electronic device, a user may pair it with a target electronic device. Here, pairing may be understood as an operation for connecting the head-mounted electronic device to the target electronic device via a wireless or wired link. Alternatively, pairing may be understood as binding the head-mounted electronic device to the target electronic device; on the basis of this binding, a wireless or wired communication connection can be established between the two devices. The head-mounted electronic device may be, for example, augmented reality glasses, and the target electronic device may be another head-mounted electronic device, a mobile terminal, or the like.
The target electronic device to be paired may include a display screen on which an image for pairing can be displayed. While wearing the head-mounted electronic device, the user can observe the real environment through its display apparatus. Therefore, when the display screen of the target electronic device appears in the user's field of view (i.e., the display screen is within the field of view of the user wearing the head-mounted electronic device), the executing body of the pairing method (e.g., the head-mounted electronic device) may control the image capture device in the head-mounted electronic device to capture images of the display screen, thereby obtaining at least one to-be-processed image. Note that the executing body may set the number of acquired to-be-processed images as required.
It is understood that the display screen of the target electronic device may display images in real time. When the target electronic device displays images on the display screen, different images may be shown in the same region of the screen at different times; alternatively, different images may be shown in different regions of the screen at different times. Neither arrangement is specifically limited here.
S104: for each to-be-processed image in the at least one to-be-processed image, perform target object recognition on the to-be-processed image to obtain target feature information of the target object.
In this embodiment, after obtaining the at least one to-be-processed image, the executing body may perform target object recognition on any one of the to-be-processed images. Once the target object is recognized, its target feature information can be obtained.
It is understood that the target object may be a graphic, text, or the like. On the target electronic device, the display screen may continuously display the target object in a fixed area; alternatively, it may time-divisionally display different (or identical) target objects in a fixed area, or time-divisionally display the target object in different areas. Further, any to-be-processed image may contain one or more target objects. If the to-be-processed image contains a single target object, the target feature information obtained by target object recognition may be the feature information of that object; if it contains multiple target objects, the obtained target feature information may be the feature information of all of them. The target feature information may include at least one of shape feature information, color feature information, position feature information, and the like of the target object, and can characterize attributes of the target object.
In general, target object recognition may be understood as locating and outlining objects in an image using an algorithm (either a conventional algorithm or a deep-learning-based one) and then identifying what each outlined object is. Target object recognition can therefore be divided into two subtasks: target detection and target identification. Specifically, target detection searches blocks of the image for the target object through visual perception; target identification, which is similar to target classification, then decides the class of the target object found in the current image block.
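The two-subtask split described above can be sketched in a few lines: a detection step proposes a candidate region, and a recognition step classifies it. The toy image format (a 2D grid of 0/1 pixels) and the class names are illustrative assumptions, not part of the patent's method.

```python
# Minimal detection-then-recognition sketch. A real implementation would use
# a conventional vision algorithm or a deep-learning model instead.

def detect_region(image):
    """Target detection: return the bounding box (top, left, bottom, right) of nonzero pixels."""
    rows = [r for r, row in enumerate(image) if any(row)]
    cols = [c for c in range(len(image[0])) if any(row[c] for row in image)]
    if not rows:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])

def classify_region(image, box):
    """Target identification: decide the class of the detected region (toy classes)."""
    top, left, bottom, right = box
    h, w = bottom - top + 1, right - left + 1
    return "square" if h == w else "rectangle"

def recognize_target(image):
    box = detect_region(image)
    if box is None:
        return None
    return {"box": box, "label": classify_region(image, box)}

image = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(recognize_target(image))  # {'box': (1, 1, 2, 2), 'label': 'square'}
```

The returned label and bounding box stand in for the "target feature information" that the matching step consumes.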
S106: in response to determining that the obtained target characteristic information matches the pre-stored characteristic information, pairing the head-mounted electronic device with the target electronic device.
In this embodiment, the executing body may store feature information in advance, to be matched against the target feature information. After the target feature information of the target object is obtained through recognition, the executing body may match it against the pre-stored feature information and obtain a matching result. If the target feature information is determined to match the pre-stored feature information, the head-mounted electronic device may be paired with the target electronic device.
It is understood that the executing body may directly end the pairing if it determines that the obtained target feature information does not match the pre-stored feature information. Alternatively, in that case, the executing body may re-execute the pairing method of steps S102 to S106.
That the target feature information matches the pre-stored feature information may be understood as the two satisfying a preset specific relationship. For example, when pre-storing the feature information, the executing body may also establish mappings between that feature information and different pieces of target feature information. If the pre-stored feature information contains an entry mapped to the target feature information, the target feature information can be regarded as matching the feature information, and the head-mounted electronic device may then be paired with the target electronic device.
In some optional implementations, if the executing body obtains a single to-be-processed image and its target feature information matches the pre-stored feature information, the head-mounted electronic device may be paired with the target electronic device. If the executing body obtains multiple to-be-processed images, the two devices may be paired when, for each to-be-processed image satisfying a preset condition, the target feature information of that image matches the pre-stored feature information. As an example, upon determining that the target feature information in two consecutive to-be-processed images matches the pre-stored feature information, the executing body may determine that the target electronic device can be paired with the head-mounted electronic device.
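The consecutive-image example above amounts to a small run-length check over recognized frames. The sketch below assumes feature comparison by simple equality and a required run of two frames, matching the example; both are simplifying assumptions.

```python
# Pair only if the recognized feature information matches the pre-stored
# information in a run of consecutive to-be-processed images.

def should_pair(frames_features, stored_features, required_consecutive=2):
    """Return True if `required_consecutive` consecutive frames match."""
    run = 0
    for features in frames_features:
        run = run + 1 if features == stored_features else 0
        if run >= required_consecutive:
            return True
    return False

stored = {"shape": "circle", "color": "red"}
frames = [{"shape": "star"}, stored, stored]   # last two frames match
print(should_pair(frames, stored))             # True
print(should_pair([stored, {"shape": "star"}, stored], stored))  # False
```

Requiring consecutive matches filters out one-off recognition noise before committing to a pairing.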
In the technical solution of the embodiment shown in fig. 1, when the head-mounted electronic device is to be paired with an external target electronic device, an image capture device in the head-mounted electronic device may be controlled to capture images of the display screen of the target electronic device, obtaining at least one to-be-processed image for pairing. By performing target object recognition on the at least one to-be-processed image, the target feature information of the target object can be obtained, and once that information is determined to match the pre-stored feature information, the two devices may be paired. Pairing the head-mounted electronic device with an external electronic device by matching the target feature information of a target object in at least one to-be-processed image against pre-stored feature information thus enriches the available pairing modes and makes pairing more flexible.
In some optional implementation manners of the embodiment shown in fig. 1, acquiring at least one to-be-processed image may specifically include:
First, the image acquisition device of the head-mounted electronic device is controlled to capture images of the display screen of the target electronic device at a preset frequency. The preset frequency can be set according to actual conditions.
It is understood that if the display screen of the target electronic device switches among displayed target objects at a certain change frequency (i.e., the target object changes dynamically), the preset frequency may be determined based on that change frequency. For example, the preset frequency may equal the change frequency, or be an integer multiple of it. Note that if the change frequency changes, the preset frequency changes accordingly. In that case, the executing body may also recognize the target object by means of target tracking or the like.
Second, when the number of acquired to-be-processed images is not less than a preset number, the image acquisition device is controlled to stop capturing, and at least one to-be-processed image is obtained from the captured images.
It can be understood that when the image acquisition device captures at the set frequency, each capture yields one to-be-processed image, so multiple to-be-processed images accumulate over time. When their number is determined to be not less than the preset number, enough images can be considered to have been acquired, and the image acquisition device may be controlled to stop capturing. The preset number can be set according to actual needs.
After acquiring the multiple to-be-processed images, the executing body may directly take all of them as the at least one to-be-processed image. Alternatively, it may select some of them: for example, the images with higher definition, or the images captured within a certain time window, chosen according to the capture order. The specific selection method is not limited.
The display screen of the target electronic device may display an image containing one or more target objects, and while the image acquisition device in the head-mounted electronic device is capturing images, both the target object and the display area where it is located may vary. Consequently, across the at least one to-be-processed image acquired by the executing body, the target objects may differ, as may the areas of the display screen in which they appear.
In this implementation, because the image capture device captures images of the target electronic device's display screen at the set frequency, the executing body can flexibly select one or more of the captured images as the at least one to-be-processed image, facilitating effective pairing of the head-mounted electronic device and the target electronic device based on those images.
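The acquisition loop in this implementation can be sketched as follows: capture at a preset frequency until a preset number of frames exists, then select a subset. The capture callback and the "sharpness" selection key are illustrative assumptions; a real device would drive its camera hardware here.

```python
import time

def acquire_images(capture_frame, preset_count, preset_hz, select=None):
    frames = []
    interval = 1.0 / preset_hz
    while len(frames) < preset_count:          # stop once enough frames exist
        frames.append(capture_frame())
        time.sleep(interval)                   # pace capture at the preset frequency
    return select(frames) if select else frames

# Simulated capture: each "frame" is a dict with a sharpness score.
scores = iter([0.4, 0.9, 0.7, 0.8])
capture = lambda: {"sharpness": next(scores)}

# Keep the two sharpest frames, mirroring "select images with higher definition".
best_two = acquire_images(capture, preset_count=4, preset_hz=1000,
                          select=lambda fs: sorted(fs, key=lambda f: f["sharpness"])[-2:])
print([f["sharpness"] for f in best_two])  # [0.8, 0.9]
```

Swapping the `select` callable changes the selection policy (sharpest frames, a time window, or no filtering at all) without touching the capture loop.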
In some optional implementations of the embodiment shown in fig. 1, before acquiring the at least one to-be-processed image, the executing body may further determine whether the head-mounted electronic device has triggered a preset pairing condition. If the preset pairing condition is determined to be triggered, the step of acquiring at least one to-be-processed image is executed; otherwise, that step may be skipped. In this implementation, the executing body executes the pairing method of the present disclosure only when the head-mounted electronic device triggers the preset pairing condition, which reduces the device's resource consumption.
In some optional implementations of the embodiment shown in fig. 1, the pairing condition may include at least one of the following: the head-mounted electronic device and the target electronic device are connected to the same Wi-Fi network; they are served by the same base station; they are located in the same designated area; or they are in the same audio environment. In this embodiment, as long as the head-mounted electronic device and the target electronic device satisfy any one or more of these conditions, the head-mounted electronic device may be determined to have triggered the preset pairing condition. Since the executing body may use any one or more of the above situations as the preset pairing condition, pairing between the head-mounted electronic device and the target electronic device is applicable to multiple scenarios, and whether the pairing condition has been triggered can be determined effectively.
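The "any one or more" trigger rule above reduces to checking whether the two devices share any of the listed contexts. In the sketch below, the device-state dictionary keys (`wifi_ssid`, `base_station`, `area`, `audio_env`) are hypothetical names chosen for illustration.

```python
# The preset pairing condition is met if the two devices share any one of
# the listed contexts (same Wi-Fi, same base station, same area, same audio
# environment).

def triggers_pairing(headset, target):
    shared_contexts = ["wifi_ssid", "base_station", "area", "audio_env"]
    return any(
        headset.get(key) is not None and headset.get(key) == target.get(key)
        for key in shared_contexts
    )

headset = {"wifi_ssid": "office-5G", "area": "room-12"}
phone = {"wifi_ssid": "office-5G", "area": "room-7"}
print(triggers_pairing(headset, phone))                          # True (same Wi-Fi)
print(triggers_pairing({"area": "lobby"}, {"area": "room-7"}))   # False
```

The `is not None` guard prevents two devices that both lack a context value from spuriously "matching" on it.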
Optionally, when the head-mounted electronic device is determined to have triggered the preset pairing condition, a prompt message may be generated. The prompt message is used to inform the user that the head-mounted electronic device and the target electronic device meet the pairing condition; receiving it also indicates to the user that the head-mounted electronic device is about to be paired with the target electronic device. The user may be the one wearing the head-mounted electronic device, and the prompt message may be a voice, text, or video message, among others.
If the executing body receives a device pairing instruction within a preset duration, it may enter pairing mode and acquire the at least one to-be-processed image. After receiving the prompt message, a user who confirms the pairing can input a device pairing instruction within the preset duration; a user who declines simply does not. Thus, if the user's device pairing instruction is received within the preset duration, the step of acquiring at least one to-be-processed image is executed; otherwise, it is not.
In this implementation, the user is prompted via the prompt message, and the step of acquiring at least one to-be-processed image is executed only upon receiving the pairing instruction. This gives the user more control over device pairing and avoids meaningless pairing attempts.
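The prompt-and-confirm step can be sketched as a small gate: prompt the user, then enter pairing mode only if a device pairing instruction arrives within the preset duration. Modeling the instruction source as a callable returning the seconds until the user responded (or `None`) is an illustrative assumption; real devices would poll an input event queue.

```python
def await_pairing_instruction(prompt, wait_for_input, timeout_s):
    prompt("Head-mounted device and target device meet the pairing condition.")
    response_delay = wait_for_input()           # None means no instruction arrived
    if response_delay is not None and response_delay <= timeout_s:
        return "enter_pairing_mode"             # proceed to acquire images
    return "skip_pairing"                       # avoid meaningless pairing

messages = []
result = await_pairing_instruction(messages.append, lambda: 2.0, timeout_s=5.0)
print(result)            # enter_pairing_mode
print(len(messages))     # 1
```

A response after the timeout (or no response at all) takes the `skip_pairing` branch, so image acquisition never starts without explicit user confirmation.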
For ease of understanding, refer to fig. 2, which is a flowchart of a pairing method for a head-mounted electronic device according to a second embodiment of the present application. The method includes the following steps.
S201: determining whether the head-mounted electronic device triggers a preset pairing condition.
If the preset pairing condition is triggered, S202 is executed. Otherwise, S201 may be executed in a loop.
S202: generate a prompt message in response to determining that the head-mounted electronic device has triggered the preset pairing condition.
The prompt message is used to inform the user that the head-mounted electronic device and the target electronic device meet the pairing condition.
S203: in response to receiving a device pairing instruction within a preset duration, control the image acquisition device of the head-mounted electronic device to capture images of the display screen of the target electronic device.
The display screen is within the field of view of the user wearing the head-mounted electronic device. During capture, images of the display screen may be taken at a preset frequency, which may be determined based on the change frequency of the target object on the display screen.
S204: obtain at least one to-be-processed image from the captured images.
The at least one to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device.
S205: for each to-be-processed image in the at least one to-be-processed image, perform target object recognition on the to-be-processed image to obtain target feature information of the target object.
S206: in response to determining that the obtained target characteristic information matches the pre-stored characteristic information, pairing the head-mounted electronic device with the target electronic device.
In a first optional implementation of the embodiment shown in fig. 1, the target feature information may include graphic feature information, which may be used to characterize the shape of the target object. Step S104 may then be implemented as follows: preprocess the to-be-processed image; perform graphic detection on the preprocessed image to determine the graphic region containing a graphic; perform graphic recognition and feature extraction on the graphic region to obtain the graphic feature information of the graphic; and take the graphic feature information as the target feature information. The preprocessing may be, for example, adjusting the size of the image or its definition, and is not specifically limited here. In this implementation, the executing body performs object recognition on the to-be-processed image to determine the graphic corresponding to the recognition result; the graphic can be understood as the target object, and by extracting its edge contour and the like, the graphic feature information of the target object can be obtained.
Optionally, graphic detection on the to-be-processed image and graphic recognition on the detected graphic region may be implemented in other ways. For example, a pre-trained pattern recognition model may perform both detection and recognition: the to-be-processed image is input to the model, which outputs the target feature information. The pattern recognition model represents the correspondence between input images and output target feature information, and may be trained by deep learning. The graphic feature information may specifically include the number of target objects contained in the to-be-processed image, their colors, their shapes and sizes, and so on.
In the case where the target feature information includes graphic feature information, accordingly, the pre-stored feature information may include pre-stored graphic feature information. Thus, the step S106 may specifically include: determining the similarity between the graphic characteristic information and pre-stored graphic characteristic information; in response to determining that the similarity is greater than or equal to the first threshold, determining that the graphical feature information matches pre-stored graphical feature information; pairing the head-mounted electronic device with the target electronic device.
The above-mentioned graphic feature information may specifically include the number of target objects, the color of the target objects, the shape and size of the target objects, and the like; accordingly, the pre-stored graphic feature information may include the number of target objects, the color of the target objects, the shape and size of the target objects, and the like. In determining the similarity between the graphic feature information and the pre-stored graphic feature information, the determination may be made in at least one of the following ways.
Similarity between the number of target objects in the graphic feature information and the number of target objects in the pre-stored graphic feature information is determined. The similarity here may be understood as whether the number of target objects is the same, and if the difference in the number of target objects is less than or equal to a set number, it may be determined that the similarity is greater than or equal to the first threshold.
Similarity between the color of the target object in the graphic feature information and the color of the target object in the pre-stored graphic feature information is determined. The similarity here can be understood as whether the color of the target object in the graphic feature information includes the color of the target object in the pre-stored graphic feature information. If so, it may be determined that the similarity is greater than or equal to the first threshold.
A similarity between the shape of the target object indicated by the graphic feature information and the shape of the target object indicated by the pre-stored graphic feature information is determined.
And determining the similarity between the size of the target object in the graphic characteristic information and the size of the target object in the pre-stored graphic characteristic information. The similarity here may be understood as whether the sizes of the target objects are the same, and if the difference in the sizes of the target objects is less than or equal to a set value, it may be determined that the similarity is greater than or equal to the first threshold.
As an example, if any one or more of the above four similarity degrees are greater than or equal to the first threshold, it may be determined that the target feature information matches the pre-stored feature information. In the event of a match, the head-mounted electronic device may be paired with the target electronic device.
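The four similarity checks above can be sketched in pure Python. The dictionary encoding, the tolerance values, and the "any one check passes" rule are illustrative assumptions consistent with the example just given, not a definitive implementation.

```python
def graphic_features_match(observed, stored,
                           count_tolerance=0, size_tolerance=0.1):
    """Return True if any one of the four per-attribute checks passes.

    `observed` and `stored` are dicts with keys 'count', 'colors'
    (a set), 'shape', and 'size' -- an assumed encoding of the
    graphic feature information.
    """
    checks = [
        # Number of target objects: difference within a set amount.
        abs(observed['count'] - stored['count']) <= count_tolerance,
        # Color: the observed colors include every pre-stored color.
        stored['colors'] <= observed['colors'],
        # Shape: exact equality stands in for a shape-similarity measure.
        observed['shape'] == stored['shape'],
        # Size: difference no greater than a set value.
        abs(observed['size'] - stored['size']) <= size_tolerance,
    ]
    return any(checks)

observed = {'count': 3, 'colors': {'red', 'blue'}, 'shape': 'circle', 'size': 1.0}
stored = {'count': 5, 'colors': {'red'}, 'shape': 'square', 'size': 2.0}
print(graphic_features_match(observed, stored))  # True: the color check passes
```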
Optionally, the step S106 may further include: it is determined whether the graphic characteristic information is complementary to pre-stored graphic characteristic information. If complementary, it may be determined that the graphical feature information matches pre-stored graphical feature information. At this time, the head-mounted electronic device may be paired with the target electronic device. The graphic feature information may specifically include the number of target objects, the color of the target objects, the shape and size of the target objects, and the like, where the complementation between the graphic feature information and the pre-stored graphic feature information may be that the sum of the number of target objects is equal to a set number. Alternatively, it may be that the sum of the color types of the target object is equal to the set type. Alternatively, the shape of the graph obtained by stitching the target objects may be a set shape. Alternatively, the target objects may be connected to each other to form a graphic.
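The complementary-matching variant can be sketched in the same style. The preset number and preset color set are illustrative placeholders for values that would be agreed between the two devices in advance.

```python
def graphic_features_complementary(observed, stored, set_count=6,
                                   set_colors=frozenset({'red', 'green', 'blue'})):
    """Complementary match: the observed and pre-stored features
    together complete a preset whole, e.g. the counts sum to a set
    number or the color types together cover a set of colors."""
    counts_complete = observed['count'] + stored['count'] == set_count
    colors_complete = observed['colors'] | stored['colors'] == set_colors
    return counts_complete or colors_complete

print(graphic_features_complementary(
    {'count': 2, 'colors': set()},
    {'count': 4, 'colors': set()}))  # True: the counts sum to the set number
```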
It is to be understood that the number of target objects, the color of the target object, and the shape and size of the target object included in the above-described graphic feature information are merely examples. Besides, the graphic feature information may also be other feature information of the graphic, which is not enumerated here. In this implementation manner, since the graphic displayed on the display screen of the target electronic device can be used as the pairing condition of the head-mounted electronic device and the target electronic device, and the two devices are paired only when the similarity between the graphic feature information of the graphic and the pre-stored graphic feature information is greater than or equal to the first threshold, the head-mounted electronic device and the target electronic device can be paired flexibly on one hand, and the pairing security can be improved on the other hand. Further, the pairing method provided by this embodiment enables the user to achieve imperceptible pairing with the target electronic device while wearing the head-mounted electronic device.
For ease of understanding, please refer to fig. 3. Fig. 3 is a flowchart illustrating a pairing method for a head-mounted electronic device according to a third embodiment of the present application, which includes the following steps.
S301: at least one image to be processed is acquired.
In some alternative implementations, specific implementations of acquiring at least one to-be-processed image may refer to specific implementations of corresponding steps in the embodiment shown in fig. 2, and will not be described in detail here.
S302: and preprocessing the image to be processed in the at least one image to be processed.
S303: and carrying out graph detection on the preprocessed image to be processed, and determining a graph area comprising a graph in the image to be processed.
S304: and carrying out pattern recognition and feature extraction on the pattern area to obtain pattern feature information of the pattern.
S305: and determining the similarity between the graphic characteristic information and the pre-stored graphic characteristic information.
S306: in response to determining that the similarity is greater than or equal to the first threshold, determining that the graphical feature information matches pre-stored graphical feature information.
S307: pairing the head-mounted electronic device with the target electronic device.
In a second alternative implementation manner of the embodiment shown in fig. 1, the target feature information may include text feature information. The text feature information is used to characterize the text contained in the target object. The step S104 may be specifically implemented by the following steps: preprocessing the image to be processed; performing character detection on the preprocessed image to be processed, and determining a character area containing character information in the image to be processed; performing text recognition and feature extraction on the character area to obtain text feature information of the text; and determining the text feature information as the target feature information. The preprocessing may be an adjustment of the size of the image, an adjustment of the definition of the image, or the like, and is not specifically limited herein.
When performing Character detection and text Recognition on an image to be processed, the method may be specifically implemented based on an OCR (Optical Character Recognition) technology. Alternatively, the method may be implemented based on other computer vision algorithms, which are not specifically limited herein.
After the text in the image to be processed is identified, feature extraction is performed on the text, so that text feature information can be obtained. The text characteristic information may include one or more of a type of text (e.g., a character string, a word, etc.), a word included in the text, a font of the text, a color of the text, and the like.
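The feature-extraction step can be sketched for the string returned by the recognizer. The dictionary encoding is an assumption; font and color are passed in because a plain string cannot carry them, whereas a real recognizer would report them alongside the text.

```python
def extract_text_features(recognized_text, font='unknown', color='unknown'):
    """Derive text feature information from an OCR result string."""
    if recognized_text.isdigit():
        text_type = 'digits'   # e.g. a numeric pairing code
    elif recognized_text.isalpha():
        text_type = 'words'
    else:
        text_type = 'mixed'    # character string of letters and digits
    return {
        'type': text_type,
        'characters': set(recognized_text),
        'font': font,
        'color': color,
    }

print(extract_text_features('PAIR42', font='sans', color='black')['type'])  # mixed
```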
In this implementation manner, the execution main body may perform character recognition on the image to be processed, so as to detect an area where a character is located in the image to be processed, and perform text recognition and feature extraction on the character area, thereby obtaining text feature information of the character in the target object.
In the case where the target feature information includes text feature information, accordingly, the pre-stored feature information may include pre-stored text feature information. Thus, step S106 in the embodiment shown in fig. 1 may specifically include: determining the similarity between the text characteristic information and pre-stored text characteristic information; in response to determining that the similarity is greater than or equal to the second threshold, determining that the text feature information matches pre-stored text feature information; pairing the head-mounted electronic device with the target electronic device.
The text feature information may specifically include the text type, the characters included in the text, the font of the text, the color of the text, and the like. Accordingly, the pre-stored text feature information may include the text type, the characters included in the text, the font of the text, the color of the text, and the like. In determining the similarity between the text feature information and the pre-stored text feature information, the determination may be made in at least one of the following ways.
And determining the similarity between the text type in the text characteristic information and the text type in the pre-stored text characteristic information. The similarity here can be understood as whether the text type in the text feature information includes a text type in pre-stored text feature information. If so, it may be determined that the similarity is greater than or equal to the second threshold.
And determining the similarity between the characters in the text characteristic information and the characters in the pre-stored text characteristic information. The similarity here can be understood as whether the characters in the text feature information are the same as the characters in the pre-stored text feature information. If the difference between the words contained in the two texts is smaller than a certain threshold, it may be determined that the similarity is greater than or equal to a second threshold.
And determining the similarity between the font of the characters in the text characteristic information and the font of the characters in the pre-stored text characteristic information. The similarity here can be understood as whether the font of the characters in the text characteristic information is the same as the font of the characters in the pre-stored text characteristic information. If so, it may be determined that the similarity is greater than or equal to the second threshold.
And determining the similarity between the text color in the text characteristic information and the text color in the pre-stored text characteristic information. The similarity here can be understood as whether the text color in the text feature information is included in the text color in the pre-stored text feature information. If so, it may be determined that the similarity is greater than or equal to the second threshold.
As an example, if any one or more of the above four similarity degrees are greater than or equal to the second threshold, it may be determined that the target feature information matches the pre-stored feature information. In the event of a match, the head-mounted electronic device may be paired with the target electronic device.
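The four text similarity checks can be sketched as follows. The attribute encoding and the character-difference limit are illustrative assumptions, mirroring the graphic-feature case.

```python
def text_features_match(observed, stored, char_diff_limit=1):
    """Return True if any one of the four per-attribute checks passes."""
    checks = [
        # Text type: the types agree.
        observed['type'] == stored['type'],
        # Characters: the sets differ by no more than a set amount.
        len(observed['characters'] ^ stored['characters']) <= char_diff_limit,
        # Font: the fonts agree.
        observed['font'] == stored['font'],
        # Color: the colors agree.
        observed['color'] == stored['color'],
    ]
    return any(checks)

observed = {'type': 'words', 'characters': {'p', 'a', 'i', 'r'},
            'font': 'serif', 'color': 'black'}
stored = {'type': 'digits', 'characters': {'p', 'a', 'i', 'r', 'x'},
          'font': 'sans', 'color': 'white'}
print(text_features_match(observed, stored))  # True: the character sets differ by one
```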
It is understood that the text type, the text included in the text characteristic information, the font of the text, and the color of the text are only described as examples, and besides, the text characteristic information may also be other characteristic information of the text, which is not illustrated here.
In this implementation manner, since the text displayed on the display screen of the target electronic device can be used as the pairing condition of the head-mounted electronic device and the target electronic device, and the two devices are paired only when the similarity between the text feature information of the text and the pre-stored text feature information is greater than or equal to the second threshold, the head-mounted electronic device and the target electronic device can be paired flexibly on one hand, and the pairing security can be improved on the other hand. Further, the pairing method provided by this embodiment enables the user to achieve imperceptible pairing with the target electronic device while wearing the head-mounted electronic device.
For ease of understanding, refer to FIG. 4. Fig. 4 is a flowchart illustrating a pairing method for a head-mounted electronic device according to a fourth embodiment of the present application, which includes the following steps.
S401: at least one image to be processed is acquired.
In some alternative implementations, specific implementations of acquiring at least one to-be-processed image may refer to specific implementations of corresponding steps in the embodiment shown in fig. 2, and will not be described in detail here.
S402: and preprocessing the image to be processed in the at least one image to be processed.
S403: and performing character detection on the preprocessed image to be processed, and determining a character area comprising character information in the image to be processed.
S404: and performing text recognition and feature extraction on the character area to obtain text feature information of the text.
S405: and determining the similarity between the text characteristic information and the pre-stored text characteristic information.
S406: in response to determining that the similarity is greater than or equal to the second threshold, determining that the text feature information matches pre-stored text feature information.
S407: pairing the head-mounted electronic device with the target electronic device.
In a third alternative implementation manner of the embodiment shown in fig. 1, the target feature information may include graphic feature information and text feature information. The graphical feature information may be used to characterize the shape of the target object. The text feature information may be used to characterize text contained in the target object. The step S104 may be specifically implemented by the following steps: preprocessing an image to be processed; carrying out graph detection and character detection on the preprocessed image to be processed, and determining a graph area containing graphs and a character area containing character information in the image to be processed; carrying out pattern recognition and feature extraction on the pattern area to obtain pattern feature information of the pattern, and carrying out text recognition and feature extraction on the character area to obtain text feature information of the text; and determining the graphic characteristic information and the text characteristic information as target characteristic information.
Specific implementation of the above steps can refer to specific implementation of corresponding steps in the first optional implementation manner and the second optional implementation manner, and description is not repeated here.
In the case where the target feature information includes graphic feature information and text feature information, the pre-stored feature information may include pre-stored graphic feature information and text feature information, accordingly. Thus, step S106 in fig. 1 may include: determining a first similarity between the graphic characteristic information and pre-stored graphic characteristic information, and determining a second similarity between the text characteristic information and pre-stored text characteristic information; in response to determining that the first similarity is greater than or equal to a first threshold and the second similarity is greater than or equal to a second threshold, determining that the target feature information matches pre-stored feature information; pairing the head-mounted electronic device with the target electronic device.
Specific implementation of the above steps can refer to specific implementation of corresponding steps in the first optional implementation manner and the second optional implementation manner, and detailed description is omitted here.
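The matching rule of this implementation, where both similarities must reach their respective thresholds before pairing, can be sketched as follows; the threshold values are illustrative assumptions.

```python
def combined_match(first_similarity, second_similarity,
                   first_threshold=0.8, second_threshold=0.8):
    """Pair only when BOTH the graphic similarity and the text
    similarity reach their thresholds; either one alone is not enough."""
    return (first_similarity >= first_threshold
            and second_similarity >= second_threshold)

print(combined_match(0.9, 0.85))  # True: both thresholds are met
print(combined_match(0.9, 0.5))   # False: the text similarity falls short
```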
In this implementation manner, since the graphics and the text displayed on the display screen of the target electronic device can be used as pairing conditions of the head-mounted electronic device and the target electronic device, and the two devices are paired only when the similarity between the graphic feature information of the graphics and the pre-stored graphic feature information is greater than or equal to the first threshold and the similarity between the text feature information of the text and the pre-stored text feature information is greater than or equal to the second threshold, the pairing security can be further improved on the basis of flexibly pairing the head-mounted electronic device with the target electronic device. Further, the pairing method provided by this embodiment enables the user to achieve imperceptible pairing with the target electronic device while wearing the head-mounted electronic device.
For ease of understanding, please refer to fig. 5. Fig. 5 is a flowchart of a fifth embodiment of a pairing method for a head-mounted electronic device according to the present application, which specifically includes the following steps.
S501: at least one image to be processed is acquired.
In some alternative implementations, specific implementations of acquiring at least one to-be-processed image may refer to specific implementations of corresponding steps in the embodiment shown in fig. 2, and will not be described in detail here.
S502: and preprocessing the image to be processed in the at least one image to be processed.
S503: and carrying out graphic detection and character detection on the preprocessed image to be processed, and determining a graphic area comprising graphics and a character area comprising character information in the image to be processed.
S504: and carrying out image recognition and feature extraction on the image area to obtain image feature information of the image, and carrying out text recognition and feature extraction on the character area to obtain text feature information of the text.
S505: a first similarity between the graphic feature information and the pre-stored graphic feature information and a second similarity between the text feature information and the pre-stored text feature information are determined.
S506: in response to determining that the first similarity is greater than or equal to a first threshold and the second similarity is greater than or equal to a second threshold, determining that the graphical feature information matches pre-stored graphical feature information and that the textual feature information matches pre-stored textual feature information.
S507: pairing the head-mounted electronic device with the target electronic device.
In a fourth alternative implementation of the embodiment shown in fig. 1, the target feature information extracted from the image to be processed may include position information that the target object is in a field of view of the head-mounted electronic device and/or graphic feature information of the target object. Step S104 in fig. 1 may specifically include: carrying out target object identification on an image to be processed; performing feature extraction on the identified result to obtain position feature information and/or graphic feature information of the target object, wherein the position feature information of the target object is used for representing the target position of the target object in the visual field of the head-mounted electronic equipment; and determining the obtained position characteristic information and/or the obtained graphic characteristic information as target characteristic information.
In the above steps, the specific implementation of identifying the target object for the image to be processed may be to perform pattern identification for the image to be processed, and reference may be made to the specific implementation of performing pattern identification for the image to be processed in the first optional implementation, which is not described in detail here.
In this implementation, the executing subject may determine a target position of a target object in the image to be processed in the field of view of the head-mounted electronic device, and/or extract a graphical feature of the target object. After feature extraction, the position feature information and/or the graphic feature information of the target object can be obtained. The position characteristic information can represent a target position of the target object in a visual field of the head-mounted electronic device. The graphic feature information may be the number of target objects, the color of the target objects, the shape and size of the target objects, and the like. After obtaining the position feature information and/or the graphic feature information, the execution subject may further determine the position feature information and/or the graphic feature information as the target feature information. When the feature extraction is carried out on the image to be processed, the target object in the image to be processed is firstly identified, and then the feature extraction is carried out on the target object to obtain the position feature information and/or the graphic feature information of the target object, so as to be used as the target feature information. Therefore, on one hand, the target characteristic information of the target object can be effectively extracted, on the other hand, the position characteristic information and/or the graphic characteristic information are/is used as the target characteristic information, and the content of the target characteristic information can be enriched. When the head-mounted electronic equipment and the target electronic equipment are subsequently paired based on the target characteristic information, the pairing conditions can be diversified, and the pairing flexibility is improved. 
Further, the pairing method provided by this embodiment enables the user to achieve imperceptible pairing with the target electronic device while wearing the head-mounted electronic device.
It should be noted that, in the case that the target feature information includes the position feature information, for convenience of subsequently pairing the head-mounted electronic device and the target electronic device based on the position feature information, the step S102 may be specifically implemented by the following steps.
First, the spatial position of the head mounted electronic device can be detected in real time. And when the spatial position of the head-mounted electronic equipment is determined to be changed based on the detection result, controlling an image acquisition device of the head-mounted electronic equipment to acquire an image of a display screen of the target electronic equipment.
In this implementation, when the user wears the head-mounted electronic device, the spatial position of the head-mounted electronic device can be changed by moving the head, so as to change the target position of the target object in the field of view of the head-mounted electronic device. When the user moves the head, the moving direction can be any direction such as left, right, up, and down, as long as the target electronic device remains in the field of view of the head-mounted electronic device. Therefore, when feature recognition is performed on the acquired image to be processed, the position feature information representing the target position of the target object in the field of view of the head-mounted electronic device can be recognized. During the process of controlling the image acquisition device of the head-mounted electronic device to acquire images, the user can also move the head multiple times, for example, so that the distance between the preset position in the field of view of the head-mounted electronic device and the target object in the field of view is gradually reduced. In this way, a plurality of images to be processed can be acquired, and a plurality of pieces of position feature information representing different target positions of the target object in the field of view of the head-mounted electronic device can be obtained based on the plurality of images to be processed, so that the head-mounted electronic device and the target electronic device can be successfully paired based on the plurality of pieces of position feature information.
Optionally, when the image capturing device is controlled to capture an image, the image capturing device may capture an image of a display screen of the target electronic device according to a preset frequency, and specific implementation manners may refer to specific implementations of the above-mentioned corresponding steps, which are not described in detail here.
And secondly, acquiring an image acquired by the image acquisition device as at least one image to be processed.
In this implementation, the image capturing device may capture one or more images. Here, the one or more images may be directly taken as the at least one to-be-processed image. Alternatively, at least one image can be selected from the images as the at least one image to be processed.
Since the step of acquiring the at least one to-be-processed image is performed when the spatial position of the head-mounted electronic device changes, the position feature information representing the target position of the target object in the view of the head-mounted electronic device can be obtained based on the acquired at least one to-be-processed image, so that the pairing can be performed subsequently based on the position feature information.
In a more specific implementation, in the case that the target feature information includes the above-mentioned position feature information, accordingly, the pre-stored feature information includes pre-stored position feature information. The pre-stored position feature information may be used to characterize a preset position in the field of view of the head-mounted electronic device. The step S106 may be specifically realized by the following steps: determining, based on the position feature information of the target object and the pre-stored position feature information, the distance between the target position represented by the position feature information and the preset position represented by the pre-stored position feature information; in response to determining that the determined distance is smaller than a preset distance, determining that the target feature information matches the pre-stored feature information; and pairing the head-mounted electronic device with the target electronic device.
That is, if the target object in the at least one image to be processed is located within a preset spatial range of a preset position within the field of view of the head-mounted electronic device, then the head-mounted electronic device may be paired with the target electronic device. In practical applications, in order to facilitate pairing the head-mounted electronic device with the target electronic device, the user may rotate the head while wearing the head-mounted electronic device until a target object in a display screen of the target electronic device, which is seen by the user through the head-mounted electronic device, matches a preset position in a visual field of the head-mounted electronic device (for example, a target position of the target object overlaps the preset position), and pair the head-mounted electronic device with the target electronic device.
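The distance test can be sketched as follows, assuming field-of-view positions normalized to [0, 1] and a Euclidean metric; the patent fixes neither the coordinate system nor the metric, so both are assumptions.

```python
import math

def position_matches(target_position, preset_position, preset_distance=0.05):
    """Distance test: the target position (e.g. the centre of the
    recognized target object) must lie within the preset distance of
    the preset position in the field of view."""
    return math.dist(target_position, preset_position) < preset_distance

print(position_matches((0.52, 0.49), (0.5, 0.5)))  # True: about 0.022 away
print(position_matches((0.9, 0.1), (0.5, 0.5)))    # False: far from the preset position
```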
Optionally, in order to further improve pairing security, in the case that it is determined that the distance between the target position represented by the position feature information and the preset position represented by the pre-stored position feature information is smaller than the preset distance, it may be further determined whether the duration of the state in which the distance is smaller than the preset distance is greater than or equal to a preset duration. If so, it may be determined that the target feature information matches the pre-stored feature information, and the head-mounted electronic device is paired with the target electronic device. If not, it may be determined that the target feature information does not match the pre-stored feature information, and the head-mounted electronic device is not paired with the target electronic device. The preset duration may be set according to the actual situation, and is not specifically limited herein. Optionally, while the user rotates the head, the executing body may further prompt the user to stop rotating the head when determining that the distance between the target position and the preset position is smaller than the preset distance. The executing body may pair the head-mounted electronic device with the target electronic device after the user has stopped rotating the head for the preset duration. This can further improve pairing security.
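The optional dwell-time check can be sketched as a small state machine. The distance threshold, the duration, and the seconds-based timestamps are illustrative assumptions.

```python
class DwellMatcher:
    """Require the target to stay within the preset distance for a
    preset duration before declaring a match, as an anti-accident
    guard against a briefly passing glance."""

    def __init__(self, preset_distance=0.05, preset_duration=1.0):
        self.preset_distance = preset_distance
        self.preset_duration = preset_duration
        self._entered_at = None  # time the target entered the zone

    def update(self, distance, timestamp):
        """Feed one (distance, timestamp) sample; True means pair now."""
        if distance >= self.preset_distance:
            self._entered_at = None       # left the zone: reset the clock
            return False
        if self._entered_at is None:
            self._entered_at = timestamp  # just entered the zone
        return timestamp - self._entered_at >= self.preset_duration

matcher = DwellMatcher()
print(matcher.update(0.01, 0.0))  # False: just entered the zone
print(matcher.update(0.01, 0.5))  # False: only 0.5 s in the zone
print(matcher.update(0.01, 1.2))  # True: dwelled past the preset duration
```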
In this implementation manner, since the target position, in the field of view of the head-mounted electronic device, of the target object displayed on the display screen of the target electronic device can be used as the pairing condition of the head-mounted electronic device and the target electronic device, and the two devices are paired only in the case that the distance between the target position and the preset position is smaller than the preset distance, the head-mounted electronic device and the target electronic device can be paired flexibly on one hand, and the pairing security can be improved on the other hand.
For ease of understanding, refer to FIG. 6. Fig. 6 is a flowchart illustrating a pairing method for a head-mounted electronic device according to a sixth embodiment of the present application, which includes the following steps.
S601: Detect whether the spatial position of the head-mounted electronic device has changed.
S602: In response to determining that the spatial position of the head-mounted electronic device has changed, control an image acquisition device of the head-mounted electronic device to capture an image of the display screen of the target electronic device.
S603: Obtain the image captured by the image acquisition device as at least one to-be-processed image.
S604: For each to-be-processed image of the at least one to-be-processed image, perform target object recognition on the to-be-processed image.
S605: Perform feature extraction on the recognition result to obtain target position feature information of the target object, where the target position feature information indicates the target position of the target object within the field of view of the head-mounted electronic device.
S606: Determine the distance between the target position and a preset position based on the target position feature information of the target object and pre-stored position feature information of the field of view of the head-mounted electronic device.
The pre-stored characteristic information includes pre-stored positional characteristic information of a field of view of the head-mounted electronic device. The pre-stored position characteristic information of the visual field of the head-mounted electronic equipment is used for representing the preset position of the visual field of the head-mounted electronic equipment.
S607: In response to determining that the distance is smaller than the preset distance, determine that the target position feature information matches the pre-stored position feature information.
S608: Pair the head-mounted electronic device with the target electronic device.
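A minimal sketch of the distance test in steps S606 to S608 might look as follows; the normalized coordinates, the helper name, and both preset values are assumptions made for illustration only:

```python
import math

PRESET_POSITION = (0.5, 0.5)  # assumed: center of the field of view, normalized
PRESET_DISTANCE = 0.05        # assumed matching tolerance

def position_matches(target_position,
                     preset_position=PRESET_POSITION,
                     preset_distance=PRESET_DISTANCE):
    """S606/S607: compute the distance between the target position and the
    preset position, then compare it against the preset distance."""
    dx = target_position[0] - preset_position[0]
    dy = target_position[1] - preset_position[1]
    return math.hypot(dx, dy) < preset_distance

# S608 would pair the devices only when this returns True.
```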
In another more specific implementation, when the target feature information of the target object includes the above-mentioned position feature information, it may further include graphic feature information of the target object. In that case, whether the target feature information matches the pre-stored feature information may be determined jointly, in combination with the graphic feature information of the target object.
In this implementation, before determining that the target characteristic information matches the pre-stored characteristic information in response to determining that the distance between the target position and the preset position is smaller than the preset distance, the similarity between the graphic characteristic information of the target object and the pre-stored graphic characteristic information may also be determined. For a specific implementation manner, reference may be made to a specific implementation manner of determining a similarity between the graph feature information and the pre-stored graph feature information in the first optional implementation manner, and a description thereof is not repeated here. If the obtained similarity is larger than or equal to the third threshold, it can be determined that the graphic feature information of the target object matches with the pre-stored graphic feature information. The third threshold may be the same as the first threshold described above, or may be set according to actual circumstances. When the matching of the graphic characteristic information of the target object and the pre-stored graphic characteristic information is determined, the matching of the target characteristic information and the pre-stored characteristic information can be determined by combining the fact that the distance between the target position and the preset position determined before is smaller than the preset distance. At this time, the head-mounted electronic device may be paired with the target electronic device.
In this implementation, it can be determined that the target electronic device and the head-mounted electronic device can be paired only when both the target position and the graphic of the target object match the preset feature information. It can be understood that the graphic of the target object matching the pre-stored feature information may mean that the graphic of the target object is the same as or similar to the pre-stored graphic, or that the graphic of the target object is complementary to the pre-stored graphic. To help the user intuitively determine whether the target electronic device matches the head-mounted electronic device, the head-mounted electronic device may display a to-be-matched graphic in the field of view of the user, where the to-be-matched graphic may be the same as, similar to, or complementary to the graphic of the target object.
In practical applications, a user wearing the head-mounted electronic device may rotate the head until the target object in the display screen of the target electronic device, as seen through the head-mounted electronic device, coincides with the preset position in the field of view of the head-mounted electronic device. The execution body may then determine whether the target position and the graphic of the target object match the pre-stored preset position and graphic. If so, the target electronic device and the head-mounted electronic device may be paired.
Optionally, in order to further improve pairing security, when the target object in the display screen of the target electronic device, as seen through the head-mounted electronic device, coincides with the preset position in the field of view of the head-mounted electronic device, the user may stop rotating the head and keep the current head state unchanged. After the user has kept the head state unchanged for a preset duration, the execution body may determine whether the target position and the graphic of the target object match the pre-stored preset position and graphic. If so, the target electronic device and the head-mounted electronic device may be paired.
It can be understood that the execution body may instead determine whether the position feature information of the target object matches the pre-stored position feature information after first determining that the graphic feature information of the target object matches the pre-stored graphic feature information; the order is not uniquely limited here.
In the foregoing implementation, the target position, within the field of view of the head-mounted electronic device, of the target object in the display screen of the target electronic device and the graphic feature information of the target object jointly serve as the pairing condition between the head-mounted electronic device and the target electronic device: the two devices are paired only when the distance between the target position and the preset position is smaller than the preset distance and the similarity between the graphic feature information of the target object and the pre-stored graphic feature information is greater than or equal to the third threshold. On one hand this allows the devices to be paired flexibly, and on the other hand it further improves pairing security.
For ease of understanding, please refer to fig. 7. Fig. 7 is a flowchart illustrating a pairing method for a head-mounted electronic device according to a seventh embodiment of the present application, which specifically includes the following steps.
S701: Detect whether the spatial position of the head-mounted electronic device has changed.
S702: In response to determining that the spatial position of the head-mounted electronic device has changed, control an image acquisition device of the head-mounted electronic device to capture an image of the display screen of the target electronic device.
S703: Obtain the image captured by the image acquisition device as at least one to-be-processed image.
S704: For each to-be-processed image of the at least one to-be-processed image, perform target object recognition on the to-be-processed image.
S705: Perform feature extraction on the recognition result to obtain target position feature information and graphic feature information of the target object, where the target position feature information indicates the target position of the target object within the field of view of the head-mounted electronic device.
S706: Determine the distance between the target position and the preset position based on the target position feature information of the target object and the pre-stored position feature information of the field of view of the head-mounted electronic device, and determine the similarity between the graphic feature information of the target object and the pre-stored graphic feature information.
The pre-stored characteristic information includes pre-stored positional characteristic information of a field of view of the head-mounted electronic device. The pre-stored position characteristic information of the visual field of the head-mounted electronic equipment is used for indicating the preset position of the visual field of the head-mounted electronic equipment.
S707: In response to determining that the distance is smaller than the preset distance and that the similarity is greater than or equal to the third threshold, determine that the target position feature information matches the pre-stored position feature information and that the graphic feature information of the target object matches the pre-stored graphic feature information.
S708: Pair the head-mounted electronic device with the target electronic device.
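The joint condition of S706 through S708 reduces to a conjunction of the two tests. In this sketch, the helper name and both threshold values are illustrative assumptions:

```python
PRESET_DISTANCE = 0.05  # assumed matching tolerance for the position test
THIRD_THRESHOLD = 0.9   # assumed similarity threshold for the graphic test

def should_pair(distance, graphic_similarity,
                preset_distance=PRESET_DISTANCE,
                third_threshold=THIRD_THRESHOLD):
    """S707: both the distance condition and the similarity condition must
    hold before the devices are paired in S708."""
    return distance < preset_distance and graphic_similarity >= third_threshold
```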
It should be noted that, in any of the embodiments shown in fig. 1 to fig. 7, after target object recognition is performed on the at least one to-be-processed image, the number of recognized target objects may be one or more. If there are a plurality of target objects, then for any of the above implementations, the head-mounted electronic device may be paired with the target electronic device only when it is determined that the feature information of every target object matches the pre-stored feature information. This ensures the security of pairing.
For example, if one to-be-processed image is obtained and that image includes a plurality of target objects, the head-mounted electronic device and the target electronic device may be paired when it is determined that the target feature information of every one of the plurality of target objects matches the pre-stored feature information.
For another example, if a plurality of to-be-processed images are obtained through multiple rounds of image acquisition and each to-be-processed image includes one or more target objects, the head-mounted electronic device and the target electronic device may be paired when it is determined that the target feature information of the target object(s) in every to-be-processed image matches the pre-stored feature information.
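The all-targets rule above can be sketched with a generic predicate; `matches` is a hypothetical stand-in for whichever matching test (graphic, text, or position) an implementation uses:

```python
def all_targets_match(target_feature_list, matches):
    """Pair only when the target feature information of every recognized
    target object matches the pre-stored feature information.
    An empty list means nothing was recognized, so do not pair."""
    return bool(target_feature_list) and all(matches(f) for f in target_feature_list)
```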
Optionally, after the head-mounted electronic device is paired with the target electronic device based on any of the above optional implementations, a data connection between the head-mounted electronic device and the target electronic device may be established, so as to facilitate data transmission between the head-mounted electronic device and the target electronic device. The data connection mode may include at least one of a wireless connection, a wired connection, a bluetooth connection, and an NFC (Near Field Communication) connection.
Since the head-mounted electronic device and the target electronic device can perform data transmission through one or more connection modes after being paired, the data transmission mode is more flexible and convenient, and effective data transmission is achieved.
Specific embodiments of the present disclosure have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 8 is a schematic structural diagram of an electronic device according to the pairing method for a head-mounted electronic device of the present application. Referring to fig. 8, at the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a random-access memory (RAM), and may further include a non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required by other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 8, but that does not indicate only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming the pairing apparatus for the head-mounted electronic device at the logic level. The processor executes the program stored in the memory and is specifically configured to perform the following operations: acquiring at least one to-be-processed image, where the to-be-processed image is obtained by an image acquisition device of the head-mounted electronic device capturing a display screen of the target electronic device, the display screen is in the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; for each to-be-processed image of the at least one to-be-processed image, performing target object recognition on the to-be-processed image to obtain target feature information of the target object; and in response to determining that the obtained target feature information matches the pre-stored feature information, pairing the head-mounted electronic device with the target electronic device.
The method performed by the pairing apparatus for a head-mounted electronic device according to the embodiment shown in fig. 8 of the present disclosure may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the present disclosure can be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in the present disclosure may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may also perform the methods of fig. 1 to 7, and implement the functions of the pairing apparatus for the head-mounted electronic device in the embodiments shown in fig. 1 to 7, which are not described herein again.
Of course, in addition to the software implementation, the electronic device of the present disclosure does not exclude other implementations, such as a logic device or a combination of software and hardware. That is, the execution body of the following processing flow is not limited to individual logic units, and may also be hardware or a logic device.
The embodiments of the present disclosure further propose a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including a plurality of application programs, can cause the portable electronic device to perform the method of the embodiments shown in figs. 1 to 7, and in particular to perform the following operations: acquiring at least one to-be-processed image, where the to-be-processed image is obtained by an image acquisition device of the head-mounted electronic device capturing a display screen of the target electronic device, the display screen is in the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; for each to-be-processed image of the at least one to-be-processed image, performing target object recognition on the to-be-processed image to obtain target feature information of the target object; and in response to determining that the obtained target feature information matches the pre-stored feature information, pairing the head-mounted electronic device with the target electronic device.
Fig. 9 is a schematic structural diagram of a pairing apparatus 90 for a pairing method of a head-mounted electronic device according to the present application. Referring to fig. 9, in a software implementation, the pairing apparatus 90 for a head-mounted electronic device may include: an acquisition module 91, an identification module 92 and a pairing module 93, wherein:
the acquiring module 91 acquires at least one to-be-processed image, where the to-be-processed image is obtained by an image acquisition device of the head-mounted electronic device capturing a display screen of the target electronic device, the display screen is in the field of view of a user wearing the head-mounted electronic device, and the to-be-processed image is used for pairing the head-mounted electronic device with the target electronic device; the identification module 92 is configured to perform, for each to-be-processed image of the at least one to-be-processed image, target object recognition on the to-be-processed image to obtain target feature information of the target object; and the pairing module 93 pairs the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches the pre-stored feature information.
Optionally, the acquiring module 91 acquires at least one image to be processed, including: and acquiring at least one to-be-processed image in response to determining that the head-mounted electronic equipment triggers a preset pairing condition.
Optionally, the obtaining module 91 obtains at least one to-be-processed image in response to determining that the head-mounted electronic device triggers the preset pairing condition, including: generating prompt information in response to the fact that the preset pairing condition is triggered by the head-mounted electronic equipment, wherein the prompt information is used for prompting a user that the head-mounted electronic equipment and the target electronic equipment meet the pairing condition; and responding to the equipment pairing instruction received within the preset time length, and acquiring at least one to-be-processed image.
Optionally, the pairing conditions include at least one of: the head-mounted electronic equipment and the target electronic equipment are connected to the same wireless fidelity WiFi; the head-mounted electronic equipment and the target electronic equipment are provided with communication service by the same base station; the head-mounted electronic equipment and the target electronic equipment are positioned in the same designated area; the head-mounted electronic device and the target electronic device are in the same audio environment.
Optionally, the acquiring module 91 acquires at least one to-be-processed image, including: controlling an image acquisition device of the head-mounted electronic device to capture an image of the display screen of the target electronic device at a preset frequency to obtain a to-be-processed image, where the preset frequency is determined based on the change frequency of the target object in the display screen of the target electronic device; and, in response to determining that the number of obtained to-be-processed images is not smaller than a preset number, controlling the image acquisition device to stop image acquisition and acquiring the at least one to-be-processed image from the obtained to-be-processed images.
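This acquisition loop can be sketched as follows; the `capture_frame` callable and both parameters are illustrative placeholders for a real camera interface, not names from the disclosure:

```python
import time

def collect_images(capture_frame, preset_count, preset_period):
    """Capture one frame every `preset_period` seconds until at least
    `preset_count` to-be-processed images have been obtained, then stop."""
    images = []
    while len(images) < preset_count:
        images.append(capture_frame())
        if len(images) < preset_count:
            time.sleep(preset_period)  # wait for the next display update
    return images
```

In practice `preset_period` would be derived from the change frequency of the target object shown on the display screen, so each capture lands on a distinct display state.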
Optionally, the identification module 92, for each to-be-processed image of the at least one to-be-processed image, performs target object recognition on the to-be-processed image to obtain the target feature information of the target object, including: preprocessing the to-be-processed image; performing graphic detection on the preprocessed to-be-processed image to determine a graphic region containing a graphic in the to-be-processed image; performing graphic recognition and feature extraction on the graphic region to obtain graphic feature information of the graphic; and determining the graphic feature information as the target feature information.
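The preprocess/detect/recognize steps above can be sketched as a composition of stages; all three callables are hypothetical stand-ins for concrete computer-vision routines, which the patent does not specify:

```python
def extract_graphic_features(image, preprocess, detect_region, recognize):
    """Preprocess the to-be-processed image, detect the graphic region,
    then recognize the graphic and extract its feature information.
    Returns None when no graphic region is found."""
    prepared = preprocess(image)
    region = detect_region(prepared)
    if region is None:
        return None  # no graphic in this to-be-processed image
    return recognize(region)
```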
Optionally, the pre-stored feature information includes pre-stored graphic feature information; and the pairing module 93, pairing the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches the pre-stored feature information, includes: determining a similarity between the target feature information and the pre-stored graphic feature information; in response to determining that the similarity is greater than or equal to a first threshold, determining that the target feature information matches the pre-stored graphic feature information; and pairing the head-mounted electronic device with the target electronic device.
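The patent does not mandate a particular similarity measure; cosine similarity over feature vectors is one common choice, sketched here with an illustrative value for the first threshold:

```python
import math

FIRST_THRESHOLD = 0.9  # illustrative value for the first threshold

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def graphic_matches(target_features, stored_features, threshold=FIRST_THRESHOLD):
    """Match when the similarity reaches the first threshold."""
    return cosine_similarity(target_features, stored_features) >= threshold
```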
Optionally, the identification module 92, for each to-be-processed image of the at least one to-be-processed image, performs target object recognition on the to-be-processed image to obtain the target feature information of the target object, including: preprocessing the to-be-processed image; performing character detection on the preprocessed to-be-processed image to determine a character region containing text information in the to-be-processed image; performing text recognition and feature extraction on the character region to obtain text feature information of the text; and determining the text feature information as the target feature information.
Optionally, the pre-stored feature information includes pre-stored text feature information; and the pairing module 93, pairing the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches the pre-stored feature information, includes: determining a similarity between the text feature information and the pre-stored text feature information; in response to determining that the similarity is greater than or equal to a second threshold, determining that the text feature information matches the pre-stored text feature information; and pairing the head-mounted electronic device with the target electronic device.
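For the text variant, `difflib.SequenceMatcher` from the Python standard library gives one illustrative similarity measure; the value of the second threshold and the sample strings are assumptions:

```python
from difflib import SequenceMatcher

SECOND_THRESHOLD = 0.8  # illustrative value for the second threshold

def text_matches(recognized_text, stored_text, threshold=SECOND_THRESHOLD):
    """Match when the similarity between the recognized text and the
    pre-stored text feature information reaches the second threshold."""
    similarity = SequenceMatcher(None, recognized_text, stored_text).ratio()
    return similarity >= threshold
```

A ratio-based comparison tolerates small OCR errors in the recognized text, which a strict equality check would not.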
Optionally, the acquiring module 91 acquires at least one image to be processed, including: in response to the fact that the spatial position of the head-mounted electronic equipment is changed, controlling an image acquisition device of the head-mounted electronic equipment to acquire an image of a display screen of the target electronic equipment; and acquiring an image acquired by the image acquisition device as at least one image to be processed.
Optionally, the target feature information includes position information of the target object in a field of view of the head-mounted electronic device; the identifying module 92 identifies a target object of at least one image to be processed in the image to be processed to obtain target feature information of the target object, including: aiming at an image to be processed in at least one image to be processed, carrying out target object identification on the image to be processed; performing feature extraction on the identified result to obtain position feature information and/or graphic feature information of the target object, wherein the position feature information of the target object is used for representing the target position of the target object in the visual field of the head-mounted electronic equipment; and determining the obtained position characteristic information and/or the obtained graphic characteristic information as target characteristic information.
Optionally, the pre-stored feature information includes pre-stored location feature information, where the pre-stored location feature information is used to represent a preset location of a field of view of the head-mounted electronic device; the pairing module 93 pairs the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches the pre-stored feature information, including: determining the distance between the target position and a preset position based on the position characteristic information of the target object and the pre-stored position characteristic information; determining that the target characteristic information is matched with the pre-stored characteristic information in response to the determined distance being smaller than the preset distance; pairing the head-mounted electronic device with the target electronic device.
Optionally, the pre-stored feature information further includes pre-stored graphic feature information; the pairing module 93 determines similarity between the graphic feature information of the target object and the pre-stored graphic feature information before determining that the target feature information matches the pre-stored feature information in response to the determined distance being less than the preset distance; in response to the determined similarity being greater than or equal to a third threshold, determining that the graphical feature information of the target object matches pre-stored graphical feature information.
Optionally, the pairing module 93 establishes a data connection between the head-mounted electronic device and the target electronic device for data transmission after pairing the head-mounted electronic device and the target electronic device, where the data connection includes at least one of a wireless connection, a wired connection, a bluetooth connection, and a near field communication NFC connection.
The pairing device 90 provided in the present disclosure may also execute the method in fig. 1 to 7, and implement the functions of the pairing device 90 in the embodiments shown in fig. 1 to 7, which are not described herein again.
In short, the above is only a preferred embodiment of the disclosure, and is not intended to limit the scope of the disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes several instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present disclosure.
While the present disclosure has been described with reference to the embodiments illustrated in the drawings, which are intended to be illustrative rather than restrictive, it will be apparent to those of ordinary skill in the art in light of the present disclosure that many more modifications may be made without departing from the spirit of the disclosure and the scope of the appended claims.

Claims (17)

1. A pairing method for a head-mounted electronic device, comprising:
acquiring at least one image to be processed, wherein the image to be processed is captured by an image acquisition apparatus of the head-mounted electronic device from a display screen of a target electronic device, the display screen is within a field of view of a user wearing the head-mounted electronic device, and the image to be processed is used for pairing the head-mounted electronic device with the target electronic device;
for each image to be processed among the at least one image to be processed, performing target object recognition on the image to be processed to obtain target feature information of a target object; and
in response to determining that the obtained target feature information matches pre-stored feature information, pairing the head-mounted electronic device with the target electronic device.
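The three-step flow of claim 1 can be sketched as follows. This is an illustrative sketch only: the function names and the toy "feature extraction" (a summary of pixel intensities) are hypothetical stand-ins, since the patent does not specify a concrete recognition algorithm.

```python
# Hypothetical sketch of the claimed pairing flow: acquire images, extract
# target features from each, pair when features match the pre-stored ones.

def extract_features(image):
    # Toy stand-in for "target object recognition": summarize a flat list
    # of pixel intensities as (min, max, mean).
    return (min(image), max(image), sum(image) / len(image))

def matches(features, stored, tol=1e-6):
    # Element-wise comparison against the pre-stored feature information.
    return all(abs(a - b) <= tol for a, b in zip(features, stored))

def try_pair(images, stored_features):
    # For each image to be processed, recognize the target object and
    # pair only when the extracted features match the stored ones.
    for image in images:
        if matches(extract_features(image), stored_features):
            return True  # proceed to pair the devices
    return False

frames = [[10, 20, 30], [5, 50, 95]]   # two captured frames
stored = (5, 95, 50.0)                 # pre-stored feature information
print(try_pair(frames, stored))        # → True
```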
2. The method of claim 1, wherein the acquiring at least one image to be processed comprises:
acquiring the at least one image to be processed in response to determining that the head-mounted electronic device has triggered a preset pairing condition.
3. The method of claim 2, wherein the acquiring the at least one image to be processed in response to determining that the head-mounted electronic device has triggered a preset pairing condition comprises:
generating prompt information in response to determining that the head-mounted electronic device has triggered the preset pairing condition, wherein the prompt information is used to prompt the user that the head-mounted electronic device and the target electronic device satisfy the pairing condition; and
acquiring the at least one image to be processed in response to receiving a device pairing instruction within a preset time period.
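Claim 3's prompt-then-wait behavior can be sketched as a bounded polling loop. The polling interval, the instruction source, and all names here are hypothetical; the claim only requires that acquisition start if a pairing instruction arrives within the preset time period.

```python
# Hypothetical sketch of claim 3: after prompting the user, start image
# acquisition only if a pairing instruction arrives before a timeout.

def await_pairing_instruction(poll, timeout_s, step_s=0.5):
    # Poll a simulated input source until the user confirms or the preset
    # time period elapses; simulated time avoids real sleeping.
    elapsed = 0.0
    while elapsed < timeout_s:
        if poll(elapsed):
            return True  # instruction received in time: acquire images
        elapsed += step_s
    return False         # timed out: do not acquire

# The simulated user confirms 1.0 s after the prompt appears.
confirm_at_1s = lambda t: t >= 1.0
print(await_pairing_instruction(confirm_at_1s, timeout_s=5.0))  # → True
```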
4. The method of claim 2, wherein the pairing condition comprises at least one of:
the head-mounted electronic device and the target electronic device are connected to the same wireless fidelity (Wi-Fi) network;
the head-mounted electronic device and the target electronic device are served by the same base station;
the head-mounted electronic device and the target electronic device are located in the same designated area; and
the head-mounted electronic device and the target electronic device are in the same audio environment.
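The "at least one of" structure of claim 4 maps naturally to a disjunction over shared-context checks. The dictionary field names below are hypothetical; the point of the sketch is that any single shared context (Wi-Fi network, base station, designated area, audio environment) suffices to trigger the condition.

```python
# Hypothetical sketch of the claim 4 pairing condition: satisfied if the
# two devices share at least one of the listed contexts.

def pairing_condition_met(headset, target):
    shared = lambda key: (headset.get(key) is not None
                          and headset.get(key) == target.get(key))
    return any([
        shared("wifi_ssid"),     # same Wi-Fi network
        shared("base_station"),  # served by the same base station
        shared("area"),          # same designated area
        shared("audio_env"),     # same audio environment
    ])

hmd = {"wifi_ssid": "office-5g", "area": "room-1"}
phone = {"wifi_ssid": "office-5g", "area": "room-2"}
print(pairing_condition_met(hmd, phone))  # same Wi-Fi network → True
```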
5. The method of claim 1, wherein the acquiring at least one image to be processed comprises:
controlling an image acquisition apparatus of the head-mounted electronic device to capture images of a display screen of the target electronic device at a preset frequency to obtain images to be processed, wherein the preset frequency is determined based on a change frequency of a target object on the display screen of the target electronic device; and
controlling the image acquisition apparatus to stop image capture in response to determining that the number of obtained images to be processed is not less than a preset number, and acquiring the at least one image to be processed from the obtained images to be processed.
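The capture loop of claim 5 can be sketched as below. The oversampling factor is an assumption: the claim only says the preset frequency is "determined based on" the change frequency of the on-screen target object, and sampling faster than the object changes is one reasonable (Nyquist-style) reading.

```python
# Hypothetical sketch of claim 5: capture at a preset frequency derived
# from the target object's change frequency, stopping at a preset count.

def capture_until(capture_frame, change_hz, preset_count, oversample=2.0):
    # Assumed policy: sample at oversample x the on-screen change rate so
    # that no target-object state is missed.
    interval = 1.0 / (change_hz * oversample)
    frames, t = [], 0.0
    while len(frames) < preset_count:   # stop once count >= preset number
        frames.append(capture_frame(t))
        t += interval                   # simulated time between captures
    return frames

fake_screen = lambda t: f"frame@{t:.2f}s"   # simulated capture device
frames = capture_until(fake_screen, change_hz=2.0, preset_count=3)
print(len(frames))  # → 3
```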
6. The method of claim 1, wherein the performing, for each image to be processed among the at least one image to be processed, target object recognition on the image to be processed to obtain target feature information of the target object comprises:
for each image to be processed among the at least one image to be processed, preprocessing the image to be processed;
performing graphic detection on the preprocessed image to be processed, and determining a graphic region of the image to be processed that contains a graphic;
performing graphic recognition and feature extraction on the graphic region to obtain graphic feature information of the graphic; and
determining the graphic feature information as the target feature information.
7. The method of claim 6, wherein the pre-stored feature information comprises pre-stored graphic feature information; and
the pairing the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches pre-stored feature information comprises:
determining a similarity between the target feature information and the pre-stored graphic feature information;
in response to determining that the similarity is greater than or equal to a first threshold, determining that the target feature information matches the pre-stored graphic feature information; and
pairing the head-mounted electronic device with the target electronic device.
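The threshold test of claims 6-7 can be sketched with cosine similarity over feature vectors. Cosine similarity and the 0.9 threshold are assumptions for illustration; the claims do not fix a similarity measure, a feature representation, or the value of the first threshold.

```python
# Hypothetical sketch of claims 6-7: compare extracted graphic feature
# vectors with pre-stored ones; match at or above a first threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def graphics_match(features, stored_features, first_threshold=0.9):
    # "Greater than or equal to a first threshold" → match.
    return cosine_similarity(features, stored_features) >= first_threshold

extracted = [0.9, 0.1, 0.4]   # features from the detected graphic region
stored = [1.0, 0.0, 0.5]      # pre-stored graphic feature information
print(graphics_match(extracted, stored))  # → True (similarity ≈ 0.99)
```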
8. The method of claim 1, wherein the performing, for each image to be processed among the at least one image to be processed, target object recognition on the image to be processed to obtain target feature information of the target object comprises:
for each image to be processed among the at least one image to be processed, preprocessing the image to be processed;
performing character detection on the preprocessed image to be processed, and determining a character region of the image to be processed that contains character information;
performing text recognition and feature extraction on the character region to obtain text feature information of the text; and
determining the text feature information as the target feature information.
9. The method of claim 8, wherein the pre-stored feature information comprises pre-stored text feature information; and
the pairing the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches pre-stored feature information comprises:
determining a similarity between the text feature information and the pre-stored text feature information;
in response to determining that the similarity is greater than or equal to a second threshold, determining that the text feature information matches the pre-stored text feature information; and
pairing the head-mounted electronic device with the target electronic device.
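For the text variant in claims 8-9, a string-similarity ratio can stand in for the claimed similarity measure. Using `difflib.SequenceMatcher` and a 0.8 threshold are assumptions; the claims specify neither the measure nor the value of the second threshold, and a ratio-based match usefully tolerates small OCR errors.

```python
# Hypothetical sketch of claims 8-9: compare recognized text with a
# pre-stored pairing string; match at or above a second threshold.
from difflib import SequenceMatcher

def text_match(recognized, stored, second_threshold=0.8):
    # Ratio in [0, 1]: 1.0 means the strings are identical.
    similarity = SequenceMatcher(None, recognized, stored).ratio()
    return similarity >= second_threshold

print(text_match("PAIR-1234", "PAIR-1234"))  # identical → True
print(text_match("PAIR-1Z34", "PAIR-1234"))  # one OCR error still matches
```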
10. The method of claim 1, wherein the acquiring at least one image to be processed comprises:
in response to detecting that a spatial position of the head-mounted electronic device has changed, controlling an image acquisition apparatus of the head-mounted electronic device to capture an image of a display screen of the target electronic device; and
acquiring the image captured by the image acquisition apparatus as the at least one image to be processed.
11. The method of claim 1, wherein the target feature information comprises position information of the target object within a field of view of the head-mounted electronic device; and
the performing, for each image to be processed among the at least one image to be processed, target object recognition on the image to be processed to obtain target feature information of the target object comprises:
for each image to be processed among the at least one image to be processed, performing target object recognition on the image to be processed;
performing feature extraction on the recognition result to obtain position feature information and/or graphic feature information of the target object, wherein the position feature information of the target object represents a target position of the target object within the field of view of the head-mounted electronic device; and
determining the obtained position feature information and/or graphic feature information as the target feature information.
12. The method of claim 11, wherein the pre-stored feature information comprises pre-stored position feature information, the pre-stored position feature information representing a preset position within the field of view of the head-mounted electronic device; and
the pairing the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches pre-stored feature information comprises:
determining a distance between the target position and the preset position based on the position feature information of the target object and the pre-stored position feature information;
in response to determining that the distance is less than a preset distance, determining that the target feature information matches the pre-stored feature information; and
pairing the head-mounted electronic device with the target electronic device.
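The position test of claim 12 reduces to a distance check between the detected target position and a preset position in the field of view. Euclidean distance, normalized view coordinates, and the centre-of-view preset position are illustrative assumptions; the claim only requires some distance below a preset threshold.

```python
# Hypothetical sketch of claim 12: the target object's position in the
# field of view must lie within a preset distance of a preset position.
import math

def position_match(target_pos, preset_pos, preset_distance):
    # Euclidean distance in normalized view coordinates (assumption).
    dx = target_pos[0] - preset_pos[0]
    dy = target_pos[1] - preset_pos[1]
    return math.hypot(dx, dy) < preset_distance

# Target detected slightly off the centre of a normalized field of view.
print(position_match((0.52, 0.47), (0.5, 0.5), preset_distance=0.1))  # → True
```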
13. The method of claim 11, wherein the pre-stored feature information further comprises pre-stored graphic feature information; and
before the determining, in response to the determined distance being less than a preset distance, that the target feature information matches the pre-stored feature information, the method further comprises:
determining a similarity between the graphic feature information of the target object and the pre-stored graphic feature information; and
in response to determining that the similarity is greater than or equal to a third threshold, determining that the graphic feature information of the target object matches the pre-stored graphic feature information.
14. The method of claim 1, wherein, after pairing the head-mounted electronic device with the target electronic device, the method further comprises:
establishing a data connection between the head-mounted electronic device and the target electronic device for data transmission, wherein the data connection comprises at least one of a wireless connection, a wired connection, a Bluetooth connection, and a near field communication (NFC) connection.
15. A pairing apparatus for a head-mounted electronic device, comprising:
an acquisition module configured to acquire at least one image to be processed, wherein the image to be processed is captured by an image acquisition apparatus of the head-mounted electronic device from a display screen of a target electronic device, the display screen is within a field of view of a user wearing the head-mounted electronic device, and the image to be processed is used for pairing the head-mounted electronic device with the target electronic device;
a recognition module configured to perform, for each image to be processed among the at least one image to be processed, target object recognition on the image to be processed to obtain target feature information of a target object; and
a pairing module configured to pair the head-mounted electronic device with the target electronic device in response to determining that the obtained target feature information matches pre-stored feature information.
16. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method of any one of claims 1-14.
17. A readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-14.
CN202011638457.7A 2020-12-31 2020-12-31 Pairing method and apparatus for head-mounted electronic device Pending CN112668589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011638457.7A CN112668589A (en) 2020-12-31 2020-12-31 Pairing method and apparatus for head-mounted electronic device


Publications (1)

Publication Number Publication Date
CN112668589A 2021-04-16

Family

ID=75413679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011638457.7A Pending CN112668589A (en) 2020-12-31 2020-12-31 Pairing method and apparatus for head-mounted electronic device

Country Status (1)

Country Link
CN (1) CN112668589A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103368617A (en) * 2013-06-28 2013-10-23 东莞宇龙通信科技有限公司 Intelligent equipment interactive system and intelligent equipment interactive method
CN104182051A (en) * 2014-08-29 2014-12-03 百度在线网络技术(北京)有限公司 Headset intelligent device and interactive system with same
US20150319554A1 (en) * 2014-04-30 2015-11-05 Broadcom Corporation Image triggered pairing
US20160037563A1 (en) * 2014-07-30 2016-02-04 Motorola Mobility Llc Connecting wireless devices using visual image capture and processing
CN105323708A (en) * 2014-07-29 2016-02-10 三星电子株式会社 Mobile device and method of pairing the same with electronic device
KR20170112527A (en) * 2016-03-31 2017-10-12 엘지전자 주식회사 Wearable device and method for controlling the same
WO2020132831A1 (en) * 2018-12-24 2020-07-02 Intel Corporation Systems and methods for pairing devices using visual recognition
CN111918118A (en) * 2019-05-08 2020-11-10 三星电子株式会社 Electronic device, user terminal, and method of controlling electronic device and user terminal


Similar Documents

Publication Publication Date Title
CN110688951B (en) Image processing method and device, electronic equipment and storage medium
KR102211641B1 (en) Image segmentation and modification of video streams
US10095949B2 (en) Method, apparatus, and computer-readable storage medium for area identification
KR102248474B1 (en) Voice command providing method and apparatus
TW202036464A (en) Text recognition method and apparatus, electronic device, and storage medium
CN110162670B (en) Method and device for generating expression package
KR102488563B1 (en) Apparatus and Method for Processing Differential Beauty Effect
US10438086B2 (en) Image information recognition processing method and device, and computer storage medium
CN104992096B (en) A kind of data guard method and mobile terminal
CN111696176B (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN107578459A (en) Expression is embedded in the method and device of candidates of input method
US12008167B2 (en) Action recognition method and device for target object, and electronic apparatus
KR20150059466A (en) Method and apparatus for recognizing object of image in electronic device
US20170153698A1 (en) Method and apparatus for providing a view window within a virtual reality scene
CN110781813B (en) Image recognition method and device, electronic equipment and storage medium
CN111709414A (en) AR device, character recognition method and device thereof, and computer-readable storage medium
CN113065591B (en) Target detection method and device, electronic equipment and storage medium
CN107220614B (en) Image recognition method, image recognition device and computer-readable storage medium
JPWO2018179222A1 (en) Computer system, screen sharing method and program
WO2023005813A1 (en) Image direction adjustment method and apparatus, and storage medium and electronic device
KR20210036039A (en) Electronic device and image processing method thereof
CN107977636B (en) Face detection method and device, terminal and storage medium
CN110633715B (en) Image processing method, network training method and device and electronic equipment
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
EP3461138A1 (en) Processing method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination