WO2022052956A1 - Image processing method, apparatus, and electronic device - Google Patents

Image processing method, apparatus, and electronic device

Info

Publication number
WO2022052956A1
Authority
WO
WIPO (PCT)
Prior art keywords
graphic
image
character
main body
length
Prior art date
Application number
PCT/CN2021/117243
Other languages
English (en)
French (fr)
Inventor
王和严
Original Assignee
维沃移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司
Priority to EP21866009.0A priority Critical patent/EP4202607A4/en
Publication of WO2022052956A1 publication Critical patent/WO2022052956A1/zh
Priority to US18/119,816 priority patent/US20230215200A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W12/00Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W12/08Access security
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/44Program or device authentication
    • G06F21/445Program or device authentication by mutual authentication, e.g. between devices or programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0876Network architectures or network communication protocols for network security for authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/02Recognising information on displays, dials, clocks

Definitions

  • the embodiments of the present application relate to the field of communications technologies, and in particular, to an image processing method, an apparatus, and an electronic device.
  • wireless connections can be established between multiple devices.
  • smart wearable devices such as smart bracelets and smart watches can establish wireless connections with mobile phones and transmit data between them.
  • a device can be identified by identity information such as a media access control (MAC) address.
  • the user can trigger device 1 to generate and display an identity image, and scan the identity image head-on with device 2, so that device 2 can read the identity information of device 1 from the identity image.
  • various data can be transmitted between the device 1 and the device 2.
  • device 1 may shake; for example, a smart watch shakes as the user's arm moves. As a result, the angle at which device 2 collects the identity image changes, so that the identity image cannot be recognized and the device's identity information cannot be read.
  • the purpose of the embodiments of the present application is to provide an image processing method, apparatus and electronic device, which can solve the problem that the scanning device cannot recognize the identity of the scanned device due to shaking of the scanned device.
  • an embodiment of the present application provides an image processing method.
  • the method includes: acquiring a target image, where the target image is an image obtained by the second device photographing a dynamic image displayed by the first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture; identifying the graphic main body of a first graphic and the graphic auxiliary body of the first graphic, where the first graphic is a graphic in the target image; in a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, determining a first character corresponding to the first graphic, where the first preset interval corresponds to the first posture; and identifying the first device based on the first character.
  • an embodiment of the present application provides an image processing apparatus.
  • the image processing device includes an acquisition module, an identification module and a determination module.
  • an acquisition module configured to acquire a target image
  • the target image is an image obtained by the second device photographing a dynamic image displayed by the first device
  • the dynamic image is used to indicate configuration information of the first device
  • the first device has a first posture.
  • the identifying module is used for identifying the graphic main body of the first graphic and the graphic auxiliary body of the first graphic, where the first graphic is a graphic in the target image acquired by the acquiring module.
  • the determining module is configured to determine the first character corresponding to the first graphic when the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, where the first preset interval corresponds to the first posture.
  • the identifying module is further configured to identify the first device based on the first character determined by the determining module.
  • an embodiment of the present application provides an electronic device. The electronic device includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method provided in the first aspect.
  • an embodiment of the present application provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method provided in the first aspect are implemented.
  • an embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the method provided in the first aspect.
  • a target image may be acquired, where the target image is an image obtained by the second device photographing a dynamic image displayed by the first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture; the graphic main body of a first graphic and the graphic auxiliary body of the first graphic are recognized, where the first graphic is a graphic in the target image; when the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, a first character corresponding to the first graphic is determined, where the first preset interval corresponds to the first posture; and the first device is recognized based on the first character.
  • the embodiment of the present application can identify the identity of the scanned device when the scanned device is shaken.
  • FIG. 1 is a schematic diagram of an image generation method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a character and graphics provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a dynamic image provided by an embodiment of the present application.
  • FIG. 4 is one of the schematic diagrams of an image processing method provided by an embodiment of the present application.
  • FIG. 6 is the second schematic diagram of the scanning dynamic image provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of determining a character according to a target image according to an embodiment of the present application.
  • FIG. 8 is the second schematic diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 10 is one of the schematic hardware diagrams of the electronic device provided by the embodiment of the application.
  • FIG. 11 is the second schematic diagram of the hardware of the electronic device provided by the embodiment of the present application.
  • “first”, “second”, and the like in the description and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the present application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by “first”, “second”, and the like are usually of one type, and the number of objects is not limited.
  • for example, the first object may be one or multiple.
  • “and/or” in the description and claims indicates at least one of the connected objects, and the character “/” generally indicates that the associated objects are in an “or” relationship.
  • the embodiments of the present application provide an image generation method, an image processing method, an image processing apparatus, and an electronic device. A target image can be acquired, where the target image is an image obtained by the second device photographing a dynamic image displayed by the first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture. The graphic main body of a first graphic and the graphic auxiliary body of the first graphic are identified, where the first graphic is a graphic in the target image. In a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, a first character corresponding to the first graphic is determined, where the first preset interval corresponds to the first posture. The first device is identified based on the first character.
  • the embodiment of the present application can identify the identity of the scanned device when the scanned device is shaken.
  • an embodiment of the present application provides an image generation method.
  • the method may be applied to the first device, which may also be referred to as a scanned device, such as a smart wearable device.
  • the method may include S101 to S103 described below.
  • S101: The first device acquires configuration information of the first device, where the configuration information includes K characters.
  • K is a positive integer.
  • the configuration information of the first device may be used to indicate the identity of the first device.
  • the above configuration information may be static configuration information.
  • the static configuration information may be factory-configured for the first device or pre-configured for a subscriber identity module (SIM) card of the first device.
  • the above configuration information may be dynamic configuration information. For example, when registration, attach, inter-RAT cell reselection, inter-RAT handover, or registration update occurs on the first device, the network device may allocate configuration information to the first device through a downlink message (for example, a system message).
  • the configuration information of the first device may include at least one of the following: a media access control (MAC) address; an international mobile equipment identity (IMEI); an integrated circuit card identity (ICCID); a mobile equipment identifier (MEID); an international mobile subscriber identity (IMSI); a 5G S-temporary mobile subscription identifier (5G-S-TMSI); or a full inactive radio network temporary identifier (FullI-RNTI).
  • the configuration information of the first device may be stored in the first device, in a network device of the network where the first device resides, or in a cloud server, which is not limited in this embodiment of the present application.
  • the above K characters may include T character strings, and each character string in the T character strings may include at least one character, that is, all characters of the T character strings may be combined into the K characters.
  • the T character strings may be in binary, ternary, quaternary, octal, decimal, hexadecimal, or another possible base.
  • T is a positive integer.
  • the MAC address of the first device is A8-9C-ED-7D-CC-9E in hexadecimal, where the first three character strings A8-9C-ED can be used to indicate the company of the first device, and the last three character strings 7D-CC-9E can be used to indicate the model of the first device.
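For illustration only, the split described above can be sketched as follows (the function name is an assumption, not from the patent):

```python
# Sketch: splitting a hexadecimal MAC address into its two halves,
# as in the A8-9C-ED-7D-CC-9E example. Names are illustrative.
def split_mac(mac: str):
    parts = mac.split("-")
    assert len(parts) == 6, "expected six two-digit hex strings"
    company, model = parts[:3], parts[3:]  # OUI-like half, device-specific half
    return company, model

company, model = split_mac("A8-9C-ED-7D-CC-9E")
print(company)  # ['A8', '9C', 'ED']
print(model)    # ['7D', 'CC', '9E']
```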
  • the configuration information of the first device may be acquired in any of the following scenarios:
  • Scenario 1: The first device receives a first input from the user, and in response to the first input, acquires the configuration information of the first device.
  • Scenario 2: The first device automatically acquires the configuration information of the first device when a preset condition is met.
  • the preset condition may be registration, attachment, multi-mode reselection, multi-mode switching, or registration update of the first device.
  • S102: For each character in the K characters, the first device generates a graphic corresponding to the character according to the main element corresponding to the character and the auxiliary element corresponding to the character.
  • for each character string in the T character strings, the first device may generate the graphics corresponding to the characters in the character string, to obtain a graphics group corresponding to each character string.
  • each of the K characters may correspond to one main element and one auxiliary element.
  • the main element corresponding to a character can be used to define the main body of a graphic, that is, the graphic main body of a graphic, including the shape and size of the graphic main body;
  • the auxiliary element corresponding to a character can be used to distinguish different values, that is, to define the graphic auxiliary body on a graphic main body, including the number of graphic auxiliary bodies and the positions of the graphic auxiliary bodies on the graphic main body.
  • the main element of each character in a character string usually does not change, while the auxiliary element of each character in the character string changes cyclically; however, the element itself is simple. For example, the auxiliary element can define different values by changing the position and the number of auxiliary bodies.
  • a graphic corresponding to any one of the K characters can be obtained in the following manner:
  • the first device obtains a graphic body of a graphic according to a main element corresponding to a character.
  • the first device obtains a graphic auxiliary body of a figure according to the auxiliary element corresponding to a character.
  • the first device generates a graphic corresponding to a character according to the graphic main body of the one graphic and the graphic auxiliary body of the one graphic.
  • the main element corresponding to character 0 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 0 as shown in FIG. 2; the auxiliary element corresponding to character 0 is used to indicate one dot located at the first endpoint of the arc, that is, the graphic auxiliary body is a dot located at the first endpoint of the arc.
  • the main element corresponding to character 1 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 1 as shown in FIG. 2; the auxiliary element corresponding to character 1 is used to indicate a point at 1/3 of the arc length from the first endpoint of the arc, that is, the graphic auxiliary body is a point that is 1/3 of the arc length from the first endpoint of the arc.
  • the main element corresponding to character 2 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 2 as shown in FIG. 2; the auxiliary element corresponding to character 2 is used to indicate a point at 1/2 of the arc length from the first endpoint of the arc, that is, the graphic auxiliary body is a point that is 1/2 of the arc length from the first endpoint of the arc.
  • the main element corresponding to character 3 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 3 as shown in FIG. 2; the auxiliary element corresponding to character 3 is used to indicate one dot located at the second endpoint of the arc, that is, the graphic auxiliary body is a dot located at the second endpoint of the arc.
  • the main element corresponding to character 4 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 4 as shown in FIG. 2; the auxiliary element corresponding to character 4 is used to indicate two dots located at the second endpoint of the arc, that is, the graphic auxiliary body is two dots located at the second endpoint of the arc.
  • the main element corresponding to character 5 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 5 as shown in FIG. 2; the auxiliary element corresponding to character 5 is used to indicate one dot located at the first endpoint of the arc and two dots located at the second endpoint of the arc, that is, the graphic auxiliary body is one dot located at the first endpoint of the arc and two dots located at the second endpoint of the arc.
  • the main element corresponding to character 6 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 6 as shown in FIG. 2; the auxiliary element corresponding to character 6 is used to indicate a point at 1/3 of the arc length from the first endpoint of the arc and two dots located at the second endpoint of the arc, that is, the graphic auxiliary body is a point that is 1/3 of the arc length from the first endpoint of the arc and two dots located at the second endpoint of the arc.
  • the main element corresponding to character 7 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 7 as shown in FIG. 2; the auxiliary element corresponding to character 7 is used to indicate a point at 1/2 of the arc length from the first endpoint of the arc and two dots located at the second endpoint of the arc, that is, the graphic auxiliary body is a point that is 1/2 of the arc length from the first endpoint of the arc and two dots located at the second endpoint of the arc.
  • the main element corresponding to character 8 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 8 as shown in FIG. 2; the auxiliary element corresponding to character 8 is used to indicate three dots located at the second endpoint of the arc, that is, the graphic auxiliary body is three dots located at the second endpoint of the arc.
  • the main element corresponding to character 9 is used to indicate an arc, that is, the graphic main body is an arc corresponding to character 9 as shown in FIG. 2; the auxiliary element corresponding to character 9 is used to indicate one dot located at the first endpoint of the arc and three dots located at the second endpoint of the arc, that is, the graphic auxiliary body is one dot located at the first endpoint of the arc and three dots located at the second endpoint of the arc.
  • the above embodiments are all illustrative and do not limit the embodiments of the present application. It can be understood that the above embodiments take character 0 to character 9 as examples for illustrative description, and any other possible characters may also be used.
  • the graphic main body of a graphic may also be a graphic other than the above-mentioned arc, and the graphic auxiliary body of a graphic may also be a graphic other than the above-mentioned dots.
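The character-to-glyph mapping for characters 0 through 9 described above can be collected into a small lookup table; a minimal sketch, with the tuple layout and names chosen for illustration:

```python
# Sketch of the character-to-glyph encoding described above. Each character
# maps to (point_on_arc, dots_at_first_endpoint, dots_at_second_endpoint),
# where point_on_arc is the fractional position of a point along the arc from
# its first endpoint (None if there is no such point). Names are illustrative.
GLYPHS = {
    "0": (None, 1, 0),    # one dot at the first endpoint
    "1": (1 / 3, 0, 0),   # point at 1/3 of the arc length
    "2": (1 / 2, 0, 0),   # point at 1/2 of the arc length
    "3": (None, 0, 1),    # one dot at the second endpoint
    "4": (None, 0, 2),    # two dots at the second endpoint
    "5": (None, 1, 2),    # one dot at the first endpoint, two at the second
    "6": (1 / 3, 0, 2),   # 1/3 point plus two dots at the second endpoint
    "7": (1 / 2, 0, 2),   # 1/2 point plus two dots at the second endpoint
    "8": (None, 0, 3),    # three dots at the second endpoint
    "9": (None, 1, 3),    # one dot at the first endpoint, three at the second
}

def decode(glyph):
    """Invert the table: recover the character from an observed glyph spec."""
    for character, spec in GLYPHS.items():
        if spec == glyph:
            return character
    return None

print(decode((1 / 2, 0, 2)))  # 7
```

Each value tuple is unique, so the mapping is invertible, which is what lets the scanning side recover a character from the recognized main body and auxiliary bodies.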
  • S103: The first device arranges the K graphics corresponding to the K characters into different regions to generate a dynamic image.
  • the above dynamic image may be used to indicate configuration information of the first device.
  • the image generated and displayed by the first device is a dynamic image; that is, the first device can switch between and display multiple dynamic images at a preset frequency to produce a certain dynamic effect, and each of the multiple dynamic images can be used to indicate the configuration information of the first device. In this way, the diversity of scanning modes can be improved.
  • the arrangement position of each graphic in each graphics group may be determined according to a preset arrangement rule. Therefore, after the first device acquires the T graphics groups, it can arrange each of the T graphics groups, according to the preset arrangement rule, in the arrangement area corresponding to that graphics group in a layer, to finally obtain a dynamic image.
  • the image processing method provided in this embodiment of the present application may further include:
  • the first device converts K initial characters from a first format into a second format to obtain K target characters, where the second format is a preset format.
  • for each target character, the first device generates a graphic corresponding to the target character according to the main element corresponding to the target character and the auxiliary element corresponding to the target character.
  • the first device arranges the K graphics corresponding to the K target characters to different regions to generate a dynamic image.
  • the K graphics may be arranged in an area where a plurality of concentric circles are located, and each ring may include a direction indicator (e.g., an arrow) and at least one graphic.
  • the M arcs in each ring may be displayed in whole or in part; alternatively, some of the M arcs in each ring may be displayed in a first display manner, and the other arcs in each ring may be displayed in a second display manner.
  • the first display manner and the second display manner may differ in the color of the arcs, the line type of the arcs, or the thickness of the arcs.
  • FIG. 3 is a schematic diagram of a dynamic image provided by an embodiment of the present application.
  • for example, the first device converts the 6 initial character strings A8-9C-ED-7D-CC-9E into 6 target character strings: 168-156-237-125-204-158.
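This conversion is consistent with interpreting each hexadecimal segment as its decimal value (A8 hex = 168 decimal, 9C = 156, and so on); a minimal sketch, with the function name chosen for illustration:

```python
# Sketch: converting hexadecimal MAC segments to decimal target strings,
# matching the A8-9C-ED-7D-CC-9E -> 168-156-237-125-204-158 example.
def hex_to_decimal_strings(mac: str) -> str:
    return "-".join(str(int(part, 16)) for part in mac.split("-"))

print(hex_to_decimal_strings("A8-9C-ED-7D-CC-9E"))  # 168-156-237-125-204-158
```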
  • the first device can first arrange the character string 168 and the character string 156 respectively into the first inner ring, where the arcs of character 1, character 8, and character 5 are indicated by solid lines, and the arcs of character 6, character 1, and character 6 are indicated by solid lines.
  • the first device can arrange the character string 237 and the character string 125 respectively into the second inner ring, where the arcs of character 2, character 7, and character 2 are represented by solid lines, and the arcs of character 3, character 1, and character 5 are indicated by solid lines. Then, the first device may arrange the character string 204 and the character string 158 respectively into the third, outer ring, where the arcs of character 2, character 4, and character 5 are represented by dashed lines, and the arcs of character 0, character 1, and character 8 are indicated by solid lines. After these character strings are arranged at different positions of the three rings, a dynamic image as shown in FIG. 3 can be obtained. Finally, the first device can switch between and display multiple dynamic images at a preset frequency.
  • relative to the i-th image, the multiple concentric circles of the (i+1)-th image are all rotated by a preset angle; that is, dynamically switching and displaying the multiple images can produce a dynamic effect of the multiple concentric circles rotating at a preset angular velocity.
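The relationship between the per-frame rotation and the apparent angular velocity can be sketched as follows (a minimal sketch; the frequency and angle values are illustrative assumptions, not from the patent):

```python
# Sketch: if each successive frame is rotated by a fixed preset angle and
# frames are switched at a preset frequency, the rings appear to rotate at a
# constant angular velocity of preset_angle * frequency. Values are illustrative.
def rotation_of_frame(i: int, preset_angle_deg: float) -> float:
    """Total rotation of the i-th frame (0-indexed) relative to the first frame."""
    return (i * preset_angle_deg) % 360.0

frequency_hz = 10.0      # frames switched per second (illustrative)
preset_angle_deg = 6.0   # rotation added per frame (illustrative)
angular_velocity = preset_angle_deg * frequency_hz  # degrees per second

print(rotation_of_frame(5, preset_angle_deg))  # 30.0
print(angular_velocity)                        # 60.0
```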
  • the embodiments of the present application provide an image generation method. Because the first device can generate the graphic corresponding to each of the K characters according to the main element and the auxiliary element corresponding to that character, thereby generating a dynamic image and further generating multiple dynamic images, the ways in which images are generated can be enriched.
  • the user may aim the camera of the second device at the dynamic image to capture the dynamic image.
  • the second device can obtain a frontal image consistent with the content of the dynamic image, recognize the graphic main body of a first graphic and the graphic auxiliary body of the first graphic in the frontal image, directly determine the first character corresponding to the first graphic according to the graphic main body and the graphic auxiliary body of the first graphic, and then identify the first device based on the first character.
  • the first device may shake slightly.
  • a smart watch may wobble as the user's arm rotates left and right, or as the user's arm moves up and down. Therefore, the angle at which the second device collects the identity image changes, so that the identity image cannot be recognized and the identity information of the device cannot be read.
  • the embodiments of the present application provide an image processing method.
  • the method may include the following S401 to S404.
  • the method is exemplarily described below by taking an image processing apparatus as the execution subject.
  • the image processing apparatus provided in this embodiment of the present application may be a second device other than the first device, or may be a third device other than the first device and the second device.
  • S401: The image processing apparatus acquires a target image.
  • the above-mentioned target image may be an image obtained by the second device photographing the dynamic image displayed by the first device.
  • the dynamic image may be used to indicate configuration information of the first device.
  • when the dynamic image displayed by the first device is photographed by the second device, the first device has a first posture.
  • the fact that the first device has the first posture means that the posture of the first device relative to the second device is the first posture.
  • when the optical axis direction of the camera of the second device is perpendicular to the plane where the dynamic image displayed by the first device is located, the camera of the second device is aligned with the front of the dynamic image, and the first device has one posture at this time;
  • when the first device shakes, the posture of the first device changes accordingly, and the angle between the optical axis direction of the camera of the second device and the plane where the dynamic image displayed by the first device is located decreases.
  • the first device has another pose.
  • the first device has different postures.
  • the posture of the first device reference may be made to the descriptions in the following embodiments, which will not be repeated here.
  • the second device captures the dynamic image displayed by the first device to obtain the target image.
  • for example, assume the first device is a smart watch and the second device is a mobile phone.
  • when the display screen of the smart watch displays a dynamic image, the user can point the rear camera of the mobile phone at the display screen of the smart watch. If the smart watch shakes at this time, the mobile phone may collect a target image with deformed graphics.
  • the third device receives the target image sent by the second device.
  • for example, again assume the first device is a smart watch and the second device is a mobile phone.
  • when the display screen of the smart watch displays a dynamic image, the user can point the rear camera of the mobile phone at the display screen of the smart watch; if the smart watch shakes at this time, the mobile phone may collect a target image with deformed graphics. Afterwards, the mobile phone can send the target image to the server, so that the server can receive and identify the target image.
  • the image processing apparatus identifies the graphic main body of the first graphic and the graphic auxiliary body of the first graphic.
  • the first graphic is a graphic in the target image.
  • the target image includes at least one image area, each image area includes a direction indicator and at least one graphic, and each graphic includes a graphic main body and at least one graphic auxiliary body.
  • the graphic main body of each graphic is characterized by the shape and size of the graphic main body; the graphic auxiliary bodies of each graphic are characterized by the number of graphic auxiliary bodies and their positions on the graphic main body.
  • a preset recognition algorithm may be stored in the image processing apparatus. After the image processing apparatus acquires the target image, it may perform step-by-step recognition on the at least one image area of the target image according to the preset recognition algorithm. Specifically, when the target image includes multiple image areas, the image processing apparatus may sequentially identify the image areas according to a preset identification order of the multiple image areas. For example, the first image area may be the first image area to be identified among the multiple image areas.
  • when the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, the image processing apparatus determines the first character corresponding to the first graphic.
  • the first preset interval corresponds to the first posture.
  • after the image processing apparatus recognizes the graphic main body of the first graphic and the graphic auxiliary body of the first graphic, it obtains the main element of the first graphic from the graphic main body of the first graphic, obtains the auxiliary element of the first graphic from the graphic auxiliary body of the first graphic, and then, according to the main element and the auxiliary element of the first graphic, determines the first character corresponding to the first graphic.
  • the embodiments of the present application set a plurality of preset intervals for describing the relative positional relationship between the graphic auxiliary body and the graphic main body under various postures, and the preset intervals corresponding to the various postures differ from one another.
  • in this way, even if the first graphic in the target image is deformed, the first character corresponding to the first graphic can still be determined according to the graphic main body and the graphic auxiliary body of the deformed first graphic.
  • different preset intervals may be set for different characters, and for the same character, different preset intervals may be set for different postures.
  • for example, when the first device shakes slightly, it has a first posture, and the first preset interval corresponding to the first posture is 42% to 58%; when the first device shakes violently, it has a second posture, and the second preset interval corresponding to the second posture is 40% to 60%.
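The posture-dependent intervals in the example above can be sketched as a lookup table keyed by character and posture. This is only an illustrative sketch: the 42%-58% and 40%-60% bounds come from the text, but the table layout, posture names, and function names are assumptions.

```python
# Hypothetical lookup of posture-dependent preset intervals.
# The numeric bounds are taken from the example in the text;
# everything else here is illustrative.
PRESET_INTERVALS = {
    # (character, posture) -> (lower bound, upper bound) of the allowed ratio
    ("2", "slight_shake"):  (0.42, 0.58),
    ("2", "violent_shake"): (0.40, 0.60),
}

def ratio_matches(character, posture, ratio):
    """Check whether a measured position ratio falls in the preset interval."""
    low, high = PRESET_INTERVALS[(character, posture)]
    return low <= ratio <= high

print(ratio_matches("2", "slight_shake", 0.50))   # True
print(ratio_matches("2", "slight_shake", 0.59))   # False: outside 42%-58%
print(ratio_matches("2", "violent_shake", 0.59))  # True: the wider interval applies
```

A ratio of 59% is rejected under the first posture but accepted under the second, which is exactly how the wider interval tolerates stronger shaking.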
  • the above S403 may be implemented by the following S1 to S3.
  • in a case that the rotation angle of the first device relative to the first direction is less than or equal to a preset angle, the image processing apparatus obtains the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic.
  • the second direction is perpendicular to the first direction.
  • the above-mentioned preset angle may be 15°.
  • the preset angle may also be other possible angles, which are not limited in the embodiments of the present application.
  • FIG. 5 includes a front view of a user wearing the smart watch and a side view of the dynamic image displayed by the smart watch as collected by the scanning device.
  • the image processing device obtains the projections of the main body of the first figure and the auxiliary figure of the first figure in the Y-axis direction, respectively.
  • the X-axis direction is perpendicular to the Y-axis direction.
  • FIG. 6 includes a front view of a user wearing the smart watch and a side view of the dynamic image displayed by the smart watch as collected by the scanning device.
  • the image processing apparatus obtains the projections of the graphic main body of the first graphic and the graphic subsidiary body of the first graphic in the X-axis direction respectively.
  • the X-axis direction is perpendicular to the Y-axis direction.
  • the image processing device determines the first ratio according to the projection of the main body of the first figure and the auxiliary figure of the first figure in the second direction.
  • the description will be continued with reference to the rotation manner shown in FIG. 5 above.
  • when the second device collects the frontal image of the dynamic image, the first graphic in the frontal image includes an arc-shaped graphic main body and a dot located between the first end point A1 and the second end point B1 of the graphic main body, so the character corresponding to the first graphic is 2.
  • when the smart watch is tilted, the target image collected by the scanning device is an image obtained by perspective of the inclined frontal image.
  • when the first ratio belongs to the first preset interval, the image processing apparatus determines the target graphic after performing image correction on the first graphic, and determines the first character corresponding to the target graphic according to the preset graphic-character correspondence rule.
  • the above-mentioned preset corresponding rules between graphics and characters may be the corresponding rules as shown in FIG. 2 .
  • the target image may include at least one image area, each image area may include a direction mark and at least one graphic, and each graphic may include a graphic main body and at least one graphic auxiliary body.
  • the image processing method provided in this embodiment of the present application may further include the following S4, and the above S2 may be implemented by the following S2a to S2c.
  • the image processing apparatus determines, according to the direction identifier of the first image area in the at least one image area, the first graphic in the first image area as the graphic to be recognized.
  • the image processing apparatus determines the first end point of the first figure and the second end point of the first figure according to the direction identification of the first image area.
  • the image processing apparatus determines the first length and the second length.
  • the first length is the length between the projections, in the second direction, of the first end point and of the graphic auxiliary body of the first graphic.
  • the second length is the length between the projections, in the second direction, of the first end point and of the second end point.
  • the image processing apparatus determines the ratio of the first length to the second length as the first ratio.
  • FIG. 7 is still used as an example for illustration.
  • (c) in FIG. 7 is an enlarged view of the first graph shown in (b) in FIG. 7 .
  • the projection of the first end point A1 of the graphic main body in the Y-axis direction is A2
  • the projection of the second end point B1 of the graphic main body in the Y-axis direction is B2.
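With the projections A2 and B2 above, the first ratio reduces to a ratio of coordinate differences: projecting a point onto the Y axis keeps only its y coordinate. A minimal sketch, with illustrative point values not taken from the figures:

```python
def first_ratio(end_a, end_b, aux_dot):
    """Ratio of (A -> dot) to (A -> B), measured along the Y axis
    (the second direction). Each argument is an (x, y) tuple in
    image coordinates."""
    first_length = abs(aux_dot[1] - end_a[1])   # distance A2 -> projection of the dot
    second_length = abs(end_b[1] - end_a[1])    # distance A2 -> B2
    return first_length / second_length

# A dot midway between the arc's end points gives a ratio of 0.5,
# which falls inside the 42%-58% interval for character 2.
print(first_ratio((10, 100), (10, 200), (30, 150)))  # 0.5
```

Note that the x coordinates cancel out entirely, which is why the ratio survives the perspective deformation described above as long as the tilt stays within the preset angle.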
  • the above-mentioned embodiment is exemplified by setting the first preset interval for the character 2. It can be understood that different preset intervals can be set for different characters according to actual usage requirements. For example, for character 1 as shown in FIG. 2, a second preset interval of 25% to 41% may be set. In this case, for a graphic including an arc and a dot, if the projection ratio belongs to the first preset interval, character 2 is determined; if the projection ratio belongs to the second preset interval, character 1 is determined. Since different characters are set with different preset intervals when the first device has one posture, different characters can be distinguished.
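Distinguishing characters by disjoint intervals under one posture amounts to an interval lookup. The sketch below uses only the two intervals named in the example (42%-58% for character 2, 25%-41% for character 1); intervals for other characters are not given in the text, so none are invented here.

```python
# Disjoint preset intervals for one posture, per the example above.
INTERVALS_FOR_POSTURE = [
    ((0.42, 0.58), "2"),
    ((0.25, 0.41), "1"),
]

def character_for_ratio(ratio):
    """Return the character whose preset interval contains the ratio,
    or None if the ratio matches no character under this posture."""
    for (low, high), character in INTERVALS_FOR_POSTURE:
        if low <= ratio <= high:
            return character
    return None

print(character_for_ratio(0.50))  # 2
print(character_for_ratio(0.30))  # 1
print(character_for_ratio(0.70))  # None
```

Because the intervals are disjoint, at most one character can match a given ratio, which is what makes the characters distinguishable under a fixed posture.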
  • the image processing apparatus recognizes the first device based on the first character.
  • the image processing method provided in this embodiment of the present application may further include: converting the first character from the second format to the first format to obtain the third character.
  • the above S404 may include: the image processing apparatus recognizes the first device based on the third character.
  • An embodiment of the present application provides an image processing method. In a case where the first device displays a dynamic image, although the angle of the target image obtained by the second device capturing the dynamic image will change with the posture of the first device, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body belongs to the first preset interval corresponding to the first posture, the first character corresponding to the first graphic can still be determined, and the first device can be identified based on the first character. Therefore, the method can identify the identity of the scanned device even when the scanned device shakes.
  • the first possible situation is that the first device is successfully recognized based on the first character; in this case, recognition of the other graphics in the target image can be stopped, and a wireless connection between the first device and the second device can be established.
  • the second possible situation is that the recognition of the first device based on the first character fails, and in this case, it is necessary to continue to recognize other graphics in the target image.
  • the image processing method provided by this embodiment of the present application may also include the following S405 to S407.
  • the image processing apparatus recognizes the graphic main body of the second graphic and the graphic auxiliary body of the second graphic.
  • the second graphic is a graphic other than the first graphic in the target image, that is, the second graphic is different from the first graphic.
  • the above-mentioned "recognizing the graphic main body of the second graphic and the graphic auxiliary body of the second graphic" may specifically include: identifying, in a second image area of the at least one image area, the graphic main body of the second graphic and the graphic auxiliary body of the second graphic.
  • when the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic belongs to a second preset interval, the image processing apparatus determines the second character corresponding to the second graphic; the second preset interval corresponds to the first posture.
  • the image processing apparatus recognizes the first device based on the first character and the second character.
  • for example, the MAC address of the first device is A8-9C-ED-7D-CC-9E in hexadecimal.
  • the string A8-9C-ED is the company identifier.
  • the string 7D-CC-9E is the device identifier.
  • the image processing apparatus may continue to identify a third image area in the at least one image area, for example, the area corresponding to points 4 to 6, to obtain a third graphic group. Then, the image processing apparatus determines the character corresponding to each graphic in the third graphic group, thereby obtaining the character string 237.
  • the image processing apparatus identifies the first device based on the character string 168-156-237, or based on the character string A8-9C-ED obtained by converting it into hexadecimal. If the company identifier of the first device is identified based on the character string 168-156-237 or the character string A8-9C-ED, the image processing apparatus may display the company information of the first device on the screen; that is, the company identifier of the first device is identified successfully. If the user wants to know more about the first device, the image processing apparatus can be triggered to continue to recognize the target image; otherwise, the image recognition is stopped.
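The conversion from the decoded decimal string 168-156-237 to the hexadecimal company identifier A8-9C-ED can be sketched in a couple of lines; the function name is illustrative.

```python
def decimal_groups_to_hex(decimal_string):
    """Convert a dash-separated decimal string such as '168-156-237'
    into the hexadecimal form 'A8-9C-ED'."""
    return "-".join(f"{int(part):02X}" for part in decimal_string.split("-"))

print(decimal_groups_to_hex("168-156-237"))  # A8-9C-ED
```

This matches the example in the text: 168, 156, and 237 are the decimal values of the bytes A8, 9C, and ED.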
  • the image processing apparatus can perform step-by-step identification of the graphics in the dynamic image displayed by the first device. Therefore, if the identified information does not meet the requirements, the graphics can be identified in the next step. When the identified information meets the requirements, the pattern recognition can be stopped, so that this identification method is more flexible and more energy-saving.
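The step-by-step, stop-when-sufficient behavior described above can be sketched as a loop over image areas in their preset order. All names here are illustrative assumptions, not identifiers from the patent.

```python
def recognize_step_by_step(image_areas, recognize_area, is_sufficient):
    """Recognize image areas one by one in their preset order.

    image_areas    -- list of areas, already sorted by the preset order
    recognize_area -- callable returning the characters decoded from one area
    is_sufficient  -- callable deciding whether the result so far is enough
    """
    result = []
    for area in image_areas:
        result.extend(recognize_area(area))
        if is_sufficient(result):
            break  # stop early: the remaining areas need not be processed
    return result

# Example: stop once three character groups have been decoded,
# leaving the fourth area unprocessed (the energy-saving case above).
areas = ["area1", "area2", "area3", "area4"]
decoded = recognize_step_by_step(
    areas,
    recognize_area=lambda a: [f"chars-of-{a}"],
    is_sufficient=lambda r: len(r) >= 3,
)
print(decoded)  # ['chars-of-area1', 'chars-of-area2', 'chars-of-area3']
```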
  • the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method.
  • the image processing apparatus provided by the embodiments of the present application is described by taking an image processing apparatus executing an image processing method as an example.
  • an embodiment of the present application provides an image processing apparatus 900 .
  • the image processing apparatus includes an acquisition module 901 , an identification module 902 and a determination module 903 .
  • the acquisition module 901 can be used to acquire a target image, where the target image is an image obtained by the second device photographing a dynamic image displayed by the first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture.
  • the identifying module 902 can be used to identify the graphic main body of the first graphic and the graphic auxiliary body of the first graphic, where the first graphic is a graphic in the target image acquired by the acquiring module 901 .
  • the determining module 903 can be configured to determine the first character corresponding to the first graphic when the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to the first preset interval. The first preset interval corresponds to the first posture.
  • the identification module 902 may also be configured to identify the first device based on the first character determined by the determination module 903 .
  • the recognition module 902 can also be used to recognize the graphic main body of the second graphic and the graphic auxiliary body of the second graphic when the first device fails to be recognized based on the first character, where the second graphic is another graphic in the target image.
  • the determining module 903 can also be used to determine the second character corresponding to the second graphic when the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic belongs to the second preset interval, The second preset interval corresponds to the first posture.
  • the identification module 902 can also be used to identify the first device based on the first character and the second character.
  • the determining module 903 can be specifically configured to: in a case that the rotation angle of the first device relative to the first direction is less than or equal to the preset angle, obtain the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, the second direction being perpendicular to the first direction; determine the first ratio according to the projections of the graphic main body of the first graphic and the graphic auxiliary body of the first graphic in the second direction; and when the first ratio belongs to the first preset interval, determine the target graphic after image correction is performed on the first graphic, and determine, according to the preset graphic-character correspondence rule, the first character corresponding to the target graphic.
  • the target image includes at least one image area, each image area includes a direction indicator and at least one graphic, and each graphic includes a graphic main body and at least one graphic auxiliary body.
  • the determining module 903 can also be used to determine, before the graphic main body of the first graphic and the graphic auxiliary body of the first graphic are identified, the first graphic in the first image area as the graphic to be recognized, according to the direction identifier of the first image area in the at least one image area.
  • the determining module 903 can be specifically configured to: determine the first endpoint of the first graphic and the second endpoint of the first graphic according to the direction identifier of the first image area; determine the first length and the second length; and determine the ratio of the first length to the second length as the first ratio.
  • the first length is the length between the projections, in the second direction, of the first end point and of the graphic auxiliary body of the first graphic; the second length is the length between the projections, in the second direction, of the first end point and of the second end point.
  • the acquiring module 901 may specifically be used to capture a dynamic image to obtain a target image.
  • the acquiring module 901 may be specifically configured to receive the target image sent by the second device after the dynamic image is captured by the second device to obtain the target image.
  • An embodiment of the present application provides an image processing apparatus. In a case where the first device displays a dynamic image, although the angle of the target image obtained by the second device capturing the dynamic image will change with the posture of the first device, the apparatus can still determine the first character corresponding to the first graphic and identify the first device based on the first character. Therefore, the apparatus can identify the identity of the scanned device when the scanned device shakes.
  • the image processing apparatus in this embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal.
  • the apparatus may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
  • the image processing apparatus in this embodiment of the present application may be an apparatus having an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
  • the image processing apparatus provided in this embodiment of the present application can implement each process implemented by the method embodiments in FIG. 4 to FIG. 8 , and to avoid repetition, details are not repeated here.
  • an embodiment of the present application further provides an electronic device 1000, including a processor 1001, a memory 1002, and a program or instruction stored in the memory 1002 and executable on the processor 1001. When the program or instruction is executed by the processor 1001, each process of the above-mentioned image processing method embodiments can be implemented with the same technical effect; to avoid repetition, details are not described here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 1100 includes but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and other components.
  • the electronic device 1100 may also include a power source (such as a battery) for supplying power to the various components. The power source may be logically connected to the processor 1110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
  • the structure of the electronic device shown in FIG. 11 does not constitute a limitation on the electronic device. The electronic device may include more or fewer components than those shown in the figure, combine some components, or arrange the components differently, which will not be repeated here.
  • the processor 1110 can be used to obtain a target image, where the target image is an image obtained by photographing, through the second device, a dynamic image displayed by the first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture.
  • the processor 1110 can also be used to: identify the graphic main body of the first graphic and the graphic auxiliary body of the first graphic; determine, when the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to the first preset interval, the first character corresponding to the first graphic; and identify the first device based on the first character, where the first preset interval corresponds to the first posture and the first graphic is a graphic in the target image.
  • the processor 1110 may also be configured to: identify, in a case of failure to recognize the first device based on the first character, the graphic main body of the second graphic and the graphic auxiliary body of the second graphic, the second graphic being another graphic in the target image; determine, when the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic belongs to the second preset interval, the second character corresponding to the second graphic, the second preset interval corresponding to the first posture; and identify the first device based on the first character and the second character.
  • the processor 1110 may be specifically configured to: in a case that the rotation angle of the first device relative to the first direction is less than or equal to the preset angle, obtain the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, the second direction being perpendicular to the first direction; determine the first ratio according to the projections of the graphic main body of the first graphic and the graphic auxiliary body of the first graphic in the second direction; and when the first ratio belongs to the first preset interval, determine the target graphic after image correction is performed on the first graphic, and determine, according to the preset graphic-character correspondence rule, the first character corresponding to the target graphic.
  • the target image includes at least one image area, each image area includes a direction indicator and at least one graphic, and each graphic includes a graphic main body and at least one graphic auxiliary body.
  • the processor 1110 may also be configured to, before the graphic main body of the first graphic and the graphic auxiliary body of the first graphic are identified, determine, according to the direction identifier of the first image area in the at least one image area, the first graphic in the first image area as the graphic to be recognized.
  • the processor 1110 can be specifically configured to: determine the first endpoint of the first graphic and the second endpoint of the first graphic according to the direction identifier of the first image area; determine the first length and the second length; and determine the ratio of the first length to the second length as the first ratio.
  • the first length is the length between the projections, in the second direction, of the first end point and of the graphic auxiliary body of the first graphic; the second length is the length between the projections, in the second direction, of the first end point and of the second end point.
  • the processor 1110 may be specifically configured to capture the dynamic image through the input unit 1104 to obtain the target image.
  • the processor 1110 may be specifically configured to receive the target image sent by the second device through the radio frequency unit 1101 after the dynamic image is captured by the second device to obtain the target image.
  • An embodiment of the present application provides an electronic device. In a case where the first device displays a dynamic image, although the angle of the target image obtained by the second device capturing the dynamic image may change with the posture of the first device, the electronic device can still determine the first character corresponding to the first graphic and identify the first device based on the first character, so the electronic device can identify the identity of the scanned device when the scanned device shakes.
  • the input unit 1104 may include a graphics processing unit (GPU) 11041 and a microphone 11042. The graphics processing unit 11041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) for processing.
  • the display unit 1106 may include a display panel 11061, which may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 1107 includes a touch panel 11071 and other input devices 11072 .
  • the touch panel 11071 is also called a touch screen.
  • the touch panel 11071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 11072 may include but are not limited to physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • Memory 1109 may be used to store software programs as well as various data including, but not limited to, application programs and operating systems.
  • the processor 1110 may integrate an application processor and a modem processor. The application processor mainly handles the operating system, user interface, application programs, and the like, while the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1110.
  • the embodiments of the present application further provide a readable storage medium. A program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, each process of the above image processing method embodiments can be implemented with the same technical effect; to avoid repetition, details are not repeated here.
  • the readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • An embodiment of the present application further provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above image processing method embodiments, and The same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
  • the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
  • the method of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation.
  • the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods of the various embodiments of the present application.


Abstract

An image processing method and apparatus, and an electronic device, belonging to the field of communication technology. The method includes: acquiring a target image, the target image being an image obtained by a second device photographing a dynamic image displayed by a first device, the dynamic image being used to indicate configuration information of the first device, and the first device having a first posture; identifying a graphic main body of a first graphic and a graphic auxiliary body of the first graphic, the first graphic being a graphic in the target image; in a case that the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, determining a first character corresponding to the first graphic, the first preset interval corresponding to the first posture; and identifying the first device based on the first character.

Description

Image processing method, apparatus, and electronic device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202010948810.5, filed in China on September 10, 2020, the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of the present application relate to the field of communication technology, and in particular to an image processing method, an image processing apparatus, and an electronic device.
Background
With the rapid development of communication technology, wireless connections can be established between multiple devices. For example, smart wearable devices such as smart bands and smart watches can establish wireless connections with mobile phones and transmit data.
Generally, a device can be identified by identity information such as a medium access control (MAC) address. When a user wants to establish a wireless connection between two devices, the user can trigger device 1 to generate and display an identity image and use device 2 to scan the identity image from the front, so that device 2 can read the identity information of device 1 from the identity image. After device 1 and device 2 establish a wireless connection, various data can be transmitted between them.
However, in the process of using device 2 to scan the identity image displayed by device 1 from the front, device 1 may shake; for example, a smart watch shakes as the user's arm moves. The angle of the identity image collected by device 2 therefore changes, so that the identity image cannot be recognized and the identity information of the device cannot be read.
Summary
The purpose of the embodiments of the present application is to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the problem that a scanning device cannot identify the identity of a scanned device when the scanned device shakes.
To solve the above technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method. The method includes: acquiring a target image, the target image being an image obtained by a second device photographing a dynamic image displayed by a first device, the dynamic image being used to indicate configuration information of the first device, and the first device having a first posture; identifying a graphic main body of a first graphic and a graphic auxiliary body of the first graphic, the first graphic being a graphic in the target image; in a case that the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, determining a first character corresponding to the first graphic, the first preset interval corresponding to the first posture; and identifying the first device based on the first character.
In a second aspect, an embodiment of the present application provides an image processing apparatus. The image processing apparatus includes an acquisition module, an identification module, and a determination module. The acquisition module is configured to acquire a target image, the target image being an image obtained by a second device photographing a dynamic image displayed by a first device, the dynamic image being used to indicate configuration information of the first device, and the first device having a first posture. The identification module is configured to identify a graphic main body of a first graphic and a graphic auxiliary body of the first graphic, the first graphic being a graphic in the target image acquired by the acquisition module. The determination module is configured to determine, in a case that the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, a first character corresponding to the first graphic, the first preset interval corresponding to the first posture. The identification module is further configured to identify the first device based on the first character determined by the determination module.
In a third aspect, an embodiment of the present application provides an electronic device. The electronic device includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor. When the program or instruction is executed by the processor, the steps of the method provided in the first aspect are implemented.
In a fourth aspect, an embodiment of the present application provides a readable storage medium. A program or instruction is stored on the readable storage medium, and when the program or instruction is executed by a processor, the steps of the method provided in the first aspect are implemented.
In a fifth aspect, an embodiment of the present application provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instruction to implement the method provided in the first aspect.
In the embodiments of the present application, a target image may be acquired, the target image being an image obtained by a second device photographing a dynamic image displayed by a first device, the dynamic image being used to indicate configuration information of the first device, and the first device having a first posture; a graphic main body of a first graphic and a graphic auxiliary body of the first graphic are identified, the first graphic being a graphic in the target image; in a case that the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic belongs to a first preset interval, a first character corresponding to the first graphic is determined, the first preset interval corresponding to the first posture; and the first device is identified based on the first character. Through this solution, in a case where the first device displays a dynamic image, although the angle of the target image obtained by the second device photographing the dynamic image changes with the posture of the first device, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body belongs to the first preset interval corresponding to the first posture, the first character corresponding to the first graphic can still be determined, and the first device can be identified based on the first character. Therefore, the embodiments of the present application can identify the identity of the scanned device in a case where the scanned device shakes.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an image generation method according to an embodiment of this application;
FIG. 2 is a schematic diagram of characters and graphics according to an embodiment of this application;
FIG. 3 is a schematic diagram of a dynamic image according to an embodiment of this application;
FIG. 4 is a first schematic diagram of an image processing method according to an embodiment of this application;
FIG. 5 is a first schematic diagram of scanning a dynamic image according to an embodiment of this application;
FIG. 6 is a second schematic diagram of scanning a dynamic image according to an embodiment of this application;
FIG. 7 is a schematic diagram of determining a character from a target image according to an embodiment of this application;
FIG. 8 is a second schematic diagram of an image processing method according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application;
FIG. 10 is a first schematic diagram of the hardware of an electronic device according to an embodiment of this application;
FIG. 11 is a second schematic diagram of the hardware of an electronic device according to an embodiment of this application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The terms "first", "second", and the like in the specification and claims of this application are used to distinguish similar objects rather than to describe a specific order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of this application can be implemented in orders other than those illustrated or described here. Objects distinguished by "first", "second", and the like are usually of one class, and the number of such objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims indicates at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The embodiments of this application provide an image generation method, an image processing method, an image processing apparatus, and an electronic device. A target image can be acquired, the target image being an image obtained by photographing, with a second device, a dynamic image displayed by a first device, the dynamic image indicating configuration information of the first device, the first device having a first posture; a graphic main body of a first graphic and a graphic auxiliary body of the first graphic are recognized, the first graphic being a graphic in the target image; in a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, a first character corresponding to the first graphic is determined, the first preset interval corresponding to the first posture; and the first device is identified based on the first character. With this solution, when the first device displays a dynamic image, although the angle of the target image obtained by photographing the dynamic image with the second device changes as the posture of the first device changes, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body falls within the first preset interval corresponding to the first posture, the first character corresponding to the first graphic can still be determined and the first device identified based on the first character. Therefore, the embodiments of this application can identify a scanned device even when the scanned device shakes.
The image generation method, image processing method, image processing apparatus, and electronic device provided in the embodiments of this application are described in detail below through specific embodiments and application scenarios with reference to the accompanying drawings.
Embodiment 1
As shown in FIG. 1, an embodiment of this application provides an image generation method. The method may be applied to a first device, which may also be called the scanned device, for example a smart wearable device. The method may include the following S101 to S103.
S101: The first device acquires configuration information of the first device, the configuration information including K characters.
K is a positive integer. The configuration information of the first device may be used to indicate the identity of the first device.
Optionally, the configuration information may be static configuration information; for example, static configuration information may be configured for the first device at the factory or preconfigured for the subscriber identity module (SIM) card of the first device. Alternatively, the configuration information may be dynamic configuration information; for example, each time the first device performs registration, attach, inter-RAT cell reselection, inter-RAT handover, or registration update, a network device may allocate configuration information to the first device through a downlink message (for example, a system message).
Optionally, the configuration information of the first device may include at least one of the following: a media access control (MAC) address; an international mobile equipment identity (IMEI); an integrated circuit card identity (ICCID); a mobile equipment identifier (MEID); an international mobile subscriber identification number (IMSI); a 5G S-temporary mobile subscription identifier (5G-S-TMSI); or a full inactive radio network temporary identifier (FullI-RNTI). Of course, the configuration information of the first device may also be any other information used to indicate the identity of the first device, which is not limited in the embodiments of this application.
Optionally, the configuration information of the first device may be stored in the first device, in a network device of the network on which the first device camps, or in a cloud server; this is not limited in the embodiments of this application.
Optionally, the K characters may include T character strings, each of the T character strings including at least one character; that is, all the characters of the T strings combine into the K characters. The T strings may be binary, quaternary, senary, octal, decimal, hexadecimal, base-32, base-64, or another possible base. T is a positive integer.
For example, suppose the MAC address of the first device is the hexadecimal A8-9C-ED-7D-CC-9E, where the first three strings, A8-9C-ED, may indicate the vendor (company) of the first device, and the last three strings, 7D-CC-9E, may indicate its model type.
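The split described above — the first three byte strings as the vendor part and the last three as the model part — can be sketched as follows. This is a minimal illustration; the helper name `split_mac` is not from the patent.

```python
def split_mac(mac: str) -> tuple[list[str], list[str]]:
    """Split a hyphen-separated MAC address into its vendor part
    (first three byte strings) and its model/device part (last three)."""
    parts = mac.split("-")
    return parts[:3], parts[3:]

vendor, device = split_mac("A8-9C-ED-7D-CC-9E")
print(vendor, device)  # ['A8', '9C', 'ED'] ['7D', 'CC', '9E']
```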
In the embodiments of this application, the configuration information of the first device may be acquired in either of the following scenarios:
Scenario 1: the first device receives a first input from a user and, in response to the first input, acquires the configuration information of the first device. Scenario 2: the first device automatically acquires its configuration information when a preset condition is met; the preset condition may be that the first device performs registration, attach, inter-RAT cell reselection, inter-RAT handover, registration update, or the like.
S102: For each of the K characters, the first device generates, according to a main element corresponding to that character and an auxiliary element corresponding to that character, one graphic corresponding to that character.
When the K characters include T character strings, for each character in each of the T strings, the first device may generate a graphic corresponding to that character according to the character's main element and auxiliary element, obtaining one graphic group corresponding to each string.
In the embodiments of this application, each of the K characters may correspond to one main element and one auxiliary element. The main element corresponding to a character may define the main-body composition of a graphic, that is, the graphic main body, including the shape and size of the graphic main body; the auxiliary element corresponding to a character may distinguish different values, that is, the graphic auxiliary body, including the number of graphic auxiliary bodies and their positions on the graphic main body. Typically, within a character string the main element of each character does not change, whereas the auxiliary element varies cyclically while remaining a single kind of element; for example, the auxiliary element may define different values through changes in position and in quantity.
Optionally, any one of the K characters may specifically be processed as follows:
(a) The first device obtains the graphic main body of a graphic according to the main element corresponding to the character.
(b) The first device obtains the graphic auxiliary body of the graphic according to the auxiliary element corresponding to the character.
(c) The first device generates the graphic corresponding to the character according to the graphic main body and the graphic auxiliary body of that graphic.
To illustrate this application more clearly, a schematic of characters and graphics is provided below. As shown in FIG. 2, suppose the main element corresponding to a character is an arc, and the auxiliary element is the position and number of dots; different dot positions and different dot counts can represent different values. For characters 0 to 9, the main element of each character's graphic is an arc, and the auxiliary element of each character's graphic is the position and number of dots. Specifically:
The main element of character 0 indicates one arc, that is, the graphic main body is the arc corresponding to character 0 shown in FIG. 2; the auxiliary element of character 0 indicates one dot at the first endpoint of the arc, that is, the graphic auxiliary body is one dot at the first endpoint of the arc.
The main element of character 1 indicates one arc, that is, the graphic main body is the arc corresponding to character 1 shown in FIG. 2; the auxiliary element of character 1 indicates one dot at 1/3 of the arc length from the first endpoint of the arc, that is, the graphic auxiliary body is one dot at 1/3 of the arc length from the first endpoint.
The main element of character 2 indicates one arc, that is, the graphic main body is the arc corresponding to character 2 shown in FIG. 2; the auxiliary element of character 2 indicates one dot at 1/2 of the arc length from the first endpoint of the arc, that is, the graphic auxiliary body is one dot at 1/2 of the arc length from the first endpoint.
The main element of character 3 indicates one arc, that is, the graphic main body is the arc corresponding to character 3 shown in FIG. 2; the auxiliary element of character 3 indicates one dot at the second endpoint of the arc, that is, the graphic auxiliary body is one dot at the second endpoint of the arc.
The main element of character 4 indicates one arc, that is, the graphic main body is the arc corresponding to character 4 shown in FIG. 2; the auxiliary element of character 4 indicates two dots at the second endpoint of the arc, that is, the graphic auxiliary body is two dots at the second endpoint of the arc.
The main element of character 5 indicates one arc, that is, the graphic main body is the arc corresponding to character 5 shown in FIG. 2; the auxiliary element of character 5 indicates one dot at the first endpoint of the arc and two dots at the second endpoint, that is, the graphic auxiliary body is one dot at the first endpoint and two dots at the second endpoint.
The main element of character 6 indicates one arc, that is, the graphic main body is the arc corresponding to character 6 shown in FIG. 2; the auxiliary element of character 6 is one dot at 1/3 of the arc length from the first endpoint of the arc and two dots at the second endpoint, that is, the graphic auxiliary body is one dot at 1/3 of the arc length from the first endpoint and two dots at the second endpoint.
The main element of character 7 indicates one arc, that is, the graphic main body is the arc corresponding to character 7 shown in FIG. 2; the auxiliary element of character 7 indicates one dot at 1/2 of the arc length from the first endpoint of the arc and two dots at the second endpoint, that is, the graphic auxiliary body is one dot at 1/2 of the arc length from the first endpoint and two dots at the second endpoint.
The main element of character 8 indicates one arc, that is, the graphic main body is the arc corresponding to character 8 shown in FIG. 2; the auxiliary element of character 8 indicates three dots at the second endpoint of the arc, that is, the graphic auxiliary body is three dots at the second endpoint of the arc.
The main element of character 9 indicates one arc, that is, the graphic main body is the arc corresponding to character 9 shown in FIG. 2; the auxiliary element of character 9 indicates one dot at the first endpoint of the arc and three dots at the second endpoint, that is, the graphic auxiliary body is one dot at the first endpoint and three dots at the second endpoint.
It should be noted that the above embodiments are exemplary descriptions and do not limit the embodiments of this application. It can be understood that the above embodiments take characters 0 to 9 as an example, and any other possible characters may also be used. In addition, the graphic main body of a graphic may be a graphic other than the above arc, and the graphic auxiliary body of a graphic may be a graphic other than the above dots.
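The FIG. 2 correspondence can be sketched as a lookup table. The tuple layout — (dot position as a fraction of the arc length from the first endpoint, dot count at that position) — is an assumption made for illustration, not a data format given by the patent.

```python
# Illustrative encoding of the FIG. 2 rule: each digit maps to a list of
# (position, count) pairs, where position is the dot's offset from the arc's
# first endpoint as a fraction of arc length (0.0 = first endpoint,
# 1.0 = second endpoint).
DIGIT_TO_DOTS = {
    "0": [(0.0, 1)],
    "1": [(1 / 3, 1)],
    "2": [(0.5, 1)],
    "3": [(1.0, 1)],
    "4": [(1.0, 2)],
    "5": [(0.0, 1), (1.0, 2)],
    "6": [(1 / 3, 1), (1.0, 2)],
    "7": [(0.5, 1), (1.0, 2)],
    "8": [(1.0, 3)],
    "9": [(0.0, 1), (1.0, 3)],
}

def encode(digits: str) -> list[list[tuple[float, int]]]:
    """Return the dot layout (auxiliary body) for each digit of the string;
    the arc (main body) is the same for every digit."""
    return [DIGIT_TO_DOTS[d] for d in digits]
```

For instance, `encode("25")` yields the layouts for the arcs of characters 2 and 5.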
S103: The first device arranges the K graphics corresponding to the K characters into different regions to generate a dynamic image.
The dynamic image may be used to indicate the configuration information of the first device.
It should be noted that, unlike the related art in which the scanned device displays a static image (for example, a QR code), the image generated and displayed by the first device provided in the embodiments of this application is a dynamic image; that is, the first device can switch among multiple dynamic images at a preset frequency to produce a dynamic effect, and each of the multiple dynamic images can indicate the configuration information of the first device. This increases the diversity of scanning approaches.
Optionally, when the K characters include T character strings, the T strings correspond to T graphic groups, each graphic group corresponds to one arrangement region, and the arrangement region of each graphic group and the arrangement position of each graphic within each group may be determined according to a preset arrangement rule. Therefore, after obtaining the T graphic groups, the first device may arrange each of the T graphic groups into the arrangement region corresponding to that group within a layer according to the preset arrangement rule, finally obtaining one dynamic image.
Optionally, the image processing method provided in the embodiments of this application may further include:
(1) The first device converts K initial characters from a first format into a second format to obtain K target characters, the second format being a preset format.
(2) For each of the K target characters, a graphic corresponding to that target character is generated according to the main element and the auxiliary element corresponding to that target character.
(3) The first device arranges the K graphics corresponding to the K target characters into different regions to generate the dynamic image.
Optionally, the K graphics may be arranged in the regions occupied by multiple concentric circles, and each ring may include one direction identifier (for example, an arrow) and at least one graphic. The M arcs in each ring may be displayed in full or in part; alternatively, some of the M arcs in each ring may use a first display mode and the remaining arcs a second display mode, where the first and second display modes may differ in arc color, in line type, or in line weight.
For example, FIG. 3 is a schematic diagram of a dynamic image according to an embodiment of this application. As shown in FIG. 3, after the first device converts the six initial strings A8-9C-ED-7D-CC-9E into the six target strings 168-156-237-125-204-158, if the first device obtains six graphic groups according to the above embodiment, the first device may first arrange string 168 and string 156 into the first, inner ring, with the arcs of characters 1, 8, and 5 drawn as solid lines and the arcs of characters 6, 1, and 6 drawn as solid lines. The first device may then arrange string 237 and string 125 into the second, inner ring, with the arcs of characters 2, 7, and 2 drawn as solid lines and the arcs of characters 3, 1, and 5 drawn as solid lines. Next, the first device may arrange string 204 and string 158 into the third, outer ring, with the arcs of characters 2, 4, and 5 drawn as dashed lines and the arcs of characters 0, 1, and 8 drawn as solid lines. After these strings are arranged at different positions of the three rings, the dynamic image shown in FIG. 3 is obtained. Finally, the first device may switch among multiple dynamic images at a preset frequency; compared with the i-th image, the concentric circles of the (i+1)-th image are all rotated by a preset angle, so that dynamically switching among the images produces the effect of the concentric circles rotating at a preset angular speed.
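The format conversion in step (1) above — hexadecimal byte strings to decimal target strings, as in A8 → 168 — can be sketched as follows. This is a minimal illustration; the patent does not fix a particular conversion routine.

```python
def hex_mac_to_decimal(mac: str) -> str:
    """Convert each hexadecimal byte string of a hyphen-separated address
    to its decimal form, e.g. 'A8-9C-ED' -> '168-156-237'."""
    return "-".join(str(int(part, 16)) for part in mac.split("-"))

print(hex_mac_to_decimal("A8-9C-ED-7D-CC-9E"))  # 168-156-237-125-204-158
```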
An embodiment of this application provides an image generation method. Since the first device can generate, for each of the K characters, a graphic according to that character's main element and auxiliary element, thereby generating one dynamic image and, further, multiple dynamic images, the ways in which images are generated can be enriched.
Embodiment 2
While the first device displays a dynamic image, the user may aim the camera of the second device at the dynamic image to photograph it. Normally, if the camera of the second device faces the dynamic image head-on and the posture of the first device does not change, the second device can acquire a front-view image consistent with the content of the dynamic image, recognize the graphic main body and the graphic auxiliary body of a first graphic in that front-view image, directly determine the first character corresponding to the first graphic from that graphic main body and graphic auxiliary body, and then identify the first device based on the first character. In some cases, however, the first device may shake slightly. For example, a smart watch shakes as the user's arm rotates left and right, or as the user's arm moves up and down. The angle of the identity image captured by the second device therefore changes, so the identity image cannot be recognized and the identity information of the device cannot be read.
To solve this problem, an embodiment of this application provides an image processing method. As shown in FIG. 4, the method may include the following S401 to S404. The method is described below by taking an image processing apparatus as the execution body. Optionally, the image processing apparatus provided in the embodiments of this application may be a second device other than the first device, or a third device other than the first device and the second device.
S401: The image processing apparatus acquires a target image.
The target image may be an image obtained by photographing, with the second device, the dynamic image displayed by the first device. The dynamic image may be used to indicate the configuration information of the first device.
In the embodiments of this application, when the dynamic image displayed by the first device is photographed by the second device, the first device has a first posture. It should be noted that the first device having the first posture means that the posture of the first device relative to the second device is the first posture. For example, when the optical axis of the camera of the second device is perpendicular to the plane of the dynamic image displayed by the first device, the camera of the second device faces the dynamic image head-on, and the first device has one posture; when the first device shakes relative to the second device, the posture of the first device changes accordingly, and the angle between the optical axis of the camera of the second device and the plane of the dynamic image displayed by the first device decreases, at which point the first device has another posture. As the angle between the optical axis of the camera of the second device and the plane of the dynamic image changes, the first device can be considered to have different postures. For the manner of determining the posture of the first device, reference may be made to the descriptions in the following embodiments, which are not repeated here.
For descriptions of the first device and the dynamic image, reference may be made to the related descriptions in Embodiment 1 above.
The above S401 may specifically be implemented in the following two ways.
First way: when the image processing apparatus is the second device, the second device photographs the dynamic image displayed by the first device to obtain the target image.
For example, the first device is a smart watch and the second device is a mobile phone. While the display of the smart watch shows the dynamic image, the user can aim the rear camera of the phone at the display of the smart watch; if the smart watch shakes at that moment, the phone captures a target image in which the graphics are deformed.
Second way: when the image processing apparatus is a third device, after the second device photographs the dynamic image displayed by the first device to obtain the target image, the third device receives the target image sent by the second device.
For example, the first device is a smart watch and the second device is a mobile phone. While the display of the smart watch shows the dynamic image, the user can aim the rear camera of the phone at the display of the smart watch; if the smart watch shakes at that moment, the phone captures a target image in which the graphics are deformed. The phone may then send the target image to a server, so that the server receives the target image and recognizes it.
S402: The image processing apparatus recognizes the graphic main body of a first graphic and the graphic auxiliary body of the first graphic.
The first graphic is one graphic in the target image.
For descriptions of the graphic main body and the graphic auxiliary body, reference may be made to the related descriptions in Embodiment 1 above.
Optionally, the target image includes at least one image region, each image region includes one direction identifier and at least one graphic, and each graphic includes one graphic main body and at least one graphic auxiliary body. The graphic main body of each graphic includes the shape and size of the graphic main body; the graphic auxiliary body of each graphic includes the number of graphic auxiliary bodies and the positions of the graphic auxiliary bodies on the graphic main body.
In the embodiments of this application, a preset recognition algorithm may be stored in the image processing apparatus. After the image processing apparatus acquires the target image, it may recognize the at least one image region of the target image step by step according to the preset recognition algorithm. Specifically, when the target image includes multiple image regions, the image processing apparatus may recognize the image regions one by one in a preset recognition order; for example, a first image region may be the first region to be recognized among the multiple image regions.
S403: In a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, the image processing apparatus determines a first character corresponding to the first graphic.
The first preset interval corresponds to the first posture.
Normally, if the camera of the second device faces the dynamic image head-on and the posture of the first device does not change, then after the image processing apparatus recognizes the graphic main body and the graphic auxiliary body of the first graphic, it can first obtain the main element of the first graphic from the graphic main body, then obtain the auxiliary element of the first graphic from the graphic auxiliary body, and then determine the character corresponding to the first graphic according to the main element and the auxiliary element.
However, when the first device shakes slightly, compared with the originally displayed dynamic image, the shapes of the graphic auxiliary body and the graphic main body in the captured target image change slightly, and the relative position of the graphic auxiliary body and the graphic main body changes. Taking character 2 shown in FIG. 2 as an example: after the first device shakes slightly, the shape of the arc may change and the dot may shift from the center of the arc to another position, so the image processing apparatus cannot determine the character corresponding to the first graphic by the above method. The embodiments of this application therefore define, for multiple postures, multiple preset intervals that describe the relative position of the graphic auxiliary body on the graphic main body, with a different preset interval for each posture. Thus, when the shapes of the graphic auxiliary body and the graphic main body change slightly and their relative position changes slightly, if the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within the first preset interval corresponding to the first posture, the first character corresponding to the first graphic can still be determined from the deformed graphic main body and graphic auxiliary body.
Optionally, when the first device is in one posture, different preset intervals are set for different characters. When the first device is in different postures, different preset intervals are set for the same character.
For example, for character 2: when the first device shakes slightly it has the first posture, and the first preset interval corresponding to the first posture is 42% to 58%; when the first device shakes violently it has a second posture, and the second preset interval corresponding to the second posture is 40% to 60%.
It can be understood that as the angle between the optical axis of the camera of the second device and the plane of the dynamic image displayed by the first device gradually decreases, the tilt of the first device relative to the second device becomes more pronounced, and the deformation of the content of the image obtained by the second device photographing the dynamic image becomes more pronounced. Therefore, to recognize the image content more accurately, the interval range corresponding to the posture of the first device must also change.
Optionally, the above S403 may be implemented through the following S1 to S3.
S1: In a case where the rotation angle of the first device relative to a first direction is less than or equal to a preset angle, the image processing apparatus separately acquires the projections, in a second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic.
The second direction is perpendicular to the first direction.
Repeated experiments have shown that when the rotation angle of the first device relative to the first direction is greater than the preset angle, the tilt of the first device is large, the deformation of the image captured by the second device is large and exceeds the recognizable range, and the projections of the graphic main body and the graphic auxiliary body in the second direction cannot be acquired. When the rotation angle of the first device relative to the first direction is less than or equal to the preset angle, the tilt of the first device is small, the deformation of the captured image is small and within the recognizable range, and the projections of the graphic main body and the graphic auxiliary body of the first graphic in the second direction can therefore be acquired separately.
Optionally, the preset angle may be 15°. Of course, it can be understood that different algorithms recognize captured images with different accuracy, so the preset angle may also be another possible angle, which is not limited in the embodiments of this application.
For example, (a) in FIG. 5 is a side view of a scanning device capturing the dynamic image displayed by a smart watch, and (b) in FIG. 5 is a front view of the user wearing the smart watch. When the user rotates the arm rightward along the positive direction of the X axis shown in FIG. 5, or leftward along the negative direction of the X axis, if the rotation angle of the smart watch relative to the X-axis direction is less than or equal to 15°, the image processing apparatus separately acquires the projections, in the Y-axis direction, of the graphic main body and of the graphic auxiliary body of the first graphic, where the X-axis direction is perpendicular to the Y-axis direction.
As another example, (a) in FIG. 6 is a side view of a scanning device capturing the dynamic image displayed by a smart watch, and (b) in FIG. 6 is a front view of the user wearing the smart watch. When the user rotates the arm upward along the positive direction of the Y axis shown in FIG. 6, or downward along the negative direction of the Y axis, if the rotation angle of the smart watch relative to the Y-axis direction is less than or equal to 15°, the image processing apparatus separately acquires the projections, in the X-axis direction, of the graphic main body and of the graphic auxiliary body of the first graphic, where the X-axis direction is perpendicular to the Y-axis direction.
S2: The image processing apparatus determines a first ratio according to the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic.
For example, continuing with the rotation shown in FIG. 5: (a) in FIG. 7 shows the front-view image obtained by the second device photographing the dynamic image when the posture of the first device has not changed. The first graphic in this front-view image includes an arc-shaped graphic main body and one dot located midway between the first endpoint A1 and the second endpoint B1 of the graphic main body, so the character corresponding to the first graphic is 2. When the user rotates the arm rightward along the positive direction of the X axis shown in FIG. 5, if the rotation angle of the smart watch relative to the X-axis direction is less than or equal to 15°, then (b) in FIG. 7 shows the target image captured by the scanning device when the posture of the first device has changed; the target image is a perspective-distorted version of the front-view image produced by the tilt.
S3: In a case where the first ratio falls within the first preset interval, the image processing apparatus determines a target graphic obtained by performing image correction on the first graphic, and determines, according to a preset graphic-to-character correspondence rule, the first character corresponding to the target graphic.
Optionally, the preset graphic-to-character correspondence rule may be the correspondence rule shown in FIG. 2.
Optionally, the target image may include at least one image region, each image region may include one direction identifier and at least one graphic, and each graphic may include one graphic main body and at least one graphic auxiliary body. For example, with reference to FIG. 4, as shown in FIG. 8, before the above S402, the image processing method provided in the embodiments of this application may further include the following S4, and the above S2 may be implemented through the following S2a to S2c.
S4: The image processing apparatus determines, according to the direction identifier of a first image region among the at least one image region, that the first graphic in the first image region is a graphic to be recognized.
S2a: The image processing apparatus determines the first endpoint of the first graphic and the second endpoint of the first graphic according to the direction identifier of the first image region.
S2b: The image processing apparatus determines a first length and a second length. The first length is the length between the projections, in the second direction, of the first endpoint and of the graphic auxiliary body of the first graphic; the second length is the length between the projections, in the second direction, of the first endpoint and the second endpoint.
S2c: The image processing apparatus determines the ratio of the first length to the second length as the first ratio.
For example, the above FIG. 7 is still used for illustration. (c) in FIG. 7 is an enlarged view of the first graphic shown in (b) of FIG. 7. As shown in (c) of FIG. 7, the projection of the first endpoint A1 of the graphic main body in the Y-axis direction is A2, and the projection of the second endpoint B1 in the Y-axis direction is B2. When the posture of the first device has changed, because the captured first graphic carries a certain amount of image distortion, the dot shifts slightly relative to the midpoint between the first endpoint A1 and the second endpoint B1. This embodiment of this application sets one preset interval for character 2, namely 42% to 58%. When the ratio of the projected length of the dot in the Y-axis direction (that is, the length from the dot's projection to point A2) to the length between the projections of the first endpoint A1 and the second endpoint B1 in that direction (that is, the length from point A2 to point B2) falls within the interval of 42% to 58% (that is, the dot lies on the arc segment between C2 and D2), it can be determined that the target graphic obtained by performing image correction on the first graphic shown in (c) of FIG. 7 is the graphic corresponding to character 2 shown in FIG. 2, and character 2 is thereby determined.
It should be noted that the above embodiment is described by taking the first preset interval set for character 2 as an example. It can be understood that different preset intervals can be set for different characters according to actual needs. For example, for character 1 shown in FIG. 2, a second preset interval of 25% to 41% may be set. In that case, for a graphic that includes one arc and one dot, if the projection ratio falls within the first preset interval, character 2 is determined; if the projection ratio falls within the second preset interval, character 1 is determined. Since different characters have different preset intervals when the first device is in a given posture, different characters can be distinguished.
S404: The image processing apparatus identifies the first device based on the first character.
Optionally, after S403 and before S404, the image processing method provided in the embodiments of this application may further include: converting the first character from the second format into the first format to obtain a third character. Correspondingly, the above S404 may include: identifying the first device based on the third character.
An embodiment of this application provides an image processing method. When the first device displays a dynamic image, although the angle of the target image obtained by photographing the dynamic image with the second device changes as the posture of the first device changes, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body falls within the first preset interval corresponding to the first posture, the first character corresponding to the first graphic can still be determined and the first device identified based on the first character. Therefore, the method can identify the scanned device even when the scanned device shakes.
Optionally, after the above S404, in a first possible case, identifying the first device based on the first character succeeds; recognition of the other graphics in the target image can then stop, and a wireless connection can be established between the first device and the second device. In a second possible case, identifying the first device based on the first character fails, and the other graphics in the target image must continue to be recognized. For the second possible case, the image processing method provided in the embodiments of this application may further include the following S405 to S407.
S405: In a case where identifying the first device based on the first character fails, the image processing apparatus recognizes the graphic main body of a second graphic and the graphic auxiliary body of the second graphic.
The second graphic is a graphic in the target image other than the first graphic; that is, the second graphic differs from the first graphic.
Optionally, the above "recognizing the graphic main body of the second graphic and the graphic auxiliary body of the second graphic" may specifically include: recognizing the graphic main body and the graphic auxiliary body of the second graphic in a second image region among the at least one image region.
S406: In a case where the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic falls within a second preset interval, the image processing apparatus determines a second character corresponding to the second graphic.
The second preset interval corresponds to the first posture.
S407: The image processing apparatus identifies the first device based on the first character and the second character.
For descriptions of the above S405 to S407, reference may be made to the related descriptions of S402 to S404 in the above embodiments, which are not repeated here.
Suppose the MAC address of the first device is the hexadecimal A8-9C-ED-7D-CC-9E, where the string A8-9C-ED is the vendor identifier and the string 7D-CC-9E is the device identifier. In a case where identifying the first device based on the first string 168 and the second string 156 fails, the image processing apparatus can continue to recognize a third image region among the at least one image region, for example the region corresponding to the 4-o'clock to 6-o'clock positions, obtaining a third graphic group. The image processing apparatus then determines the character corresponding to each graphic in the third graphic group, obtaining the string 237. The apparatus then identifies the first device based on the string 168-156-237, or based on the base-converted string A8-9C-ED. If the vendor of the first device is identified from the string 168-156-237 or the string A8-9C-ED, the image processing apparatus can display the vendor information of the first device on the screen; that is, vendor identification of the first device succeeds. If the user wants to learn the specifics of the first device, the user can trigger the image processing apparatus to continue recognizing the target image; otherwise, image recognition stops.
With the image processing method provided in this embodiment of the present invention, the image processing apparatus can recognize the graphics in the dynamic image displayed by the first device step by step. When the recognized information does not meet the demand, recognition can proceed to the next graphic; when the recognized information meets the demand, graphic recognition can stop. This recognition approach is therefore more flexible and more energy-efficient.
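The step-by-step, stop-early flow described above can be sketched as follows. The region decoder and the vendor lookup set are hypothetical stand-ins for the recognition steps; only the decoded strings 168-156-237 come from the example in the description.

```python
def stepwise_identify(regions, decode_region, known_vendors):
    """Decode image regions one at a time and stop as soon as the
    accumulated strings match a known vendor identifier."""
    decoded = []
    for region in regions:
        decoded.append(decode_region(region))
        joined = "-".join(decoded)
        if joined in known_vendors:
            return joined, decoded  # early stop: vendor identified
    return None, decoded  # every region decoded, no match found

# Hypothetical regions that decode to the strings from the FIG. 3 example.
vendor, seen = stepwise_identify(
    regions=["r1", "r2", "r3"],
    decode_region={"r1": "168", "r2": "156", "r3": "237"}.get,
    known_vendors={"168-156-237"},
)
print(vendor)  # 168-156-237
```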
It should be noted that the execution body of the image processing method provided in the embodiments of this application may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of this application, the image processing apparatus provided in the embodiments of this application is described by taking the image processing apparatus executing the image processing method as an example.
As shown in FIG. 9, an embodiment of this application provides an image processing apparatus 900. The image processing apparatus includes an acquisition module 901, a recognition module 902, and a determination module 903.
The acquisition module 901 may be configured to acquire a target image, the target image being an image obtained by photographing, with the second device, the dynamic image displayed by the first device, the dynamic image indicating the configuration information of the first device, and the first device having a first posture. The recognition module 902 may be configured to recognize the graphic main body of a first graphic and the graphic auxiliary body of the first graphic, the first graphic being a graphic in the target image acquired by the acquisition module 901. The determination module 903 may be configured to determine, in a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, a first character corresponding to the first graphic, the first preset interval corresponding to the first posture. The recognition module 902 may further be configured to identify the first device based on the first character determined by the determination module 903.
Optionally, the recognition module 902 may further be configured to recognize, in a case where identifying the first device based on the first character fails, the graphic main body of a second graphic and the graphic auxiliary body of the second graphic, the second graphic being another graphic in the target image. The determination module 903 may further be configured to determine, in a case where the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic falls within a second preset interval, a second character corresponding to the second graphic, the second preset interval corresponding to the first posture. The recognition module 902 may further be configured to identify the first device based on the first character and the second character.
Optionally, the determination module 903 may specifically be configured to: in a case where the rotation angle of the first device relative to a first direction is less than or equal to a preset angle, separately acquire the projections, in a second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, the second direction being perpendicular to the first direction; determine a first ratio according to those projections; and, in a case where the first ratio falls within the first preset interval, determine a target graphic obtained by performing image correction on the first graphic and determine, according to a preset graphic-to-character correspondence rule, the first character corresponding to the target graphic.
Optionally, the target image includes at least one image region, each image region includes one direction identifier and at least one graphic, and each graphic includes one graphic main body and at least one graphic auxiliary body. The determination module 903 may further be configured to determine, before the graphic main body and the graphic auxiliary body of the first graphic are recognized, according to the direction identifier of a first image region among the at least one image region, that the first graphic in the first image region is a graphic to be recognized.
Optionally, the determination module 903 may specifically be configured to determine the first endpoint of the first graphic and the second endpoint of the first graphic according to the direction identifier of the first image region; determine a first length and a second length; and determine the ratio of the first length to the second length as the first ratio. The first length is the length between the projections, in the second direction, of the first endpoint and of the graphic auxiliary body of the first graphic; the second length is the length between the projections, in the second direction, of the first endpoint and the second endpoint.
Optionally, the acquisition module 901 may specifically be configured to photograph the dynamic image to obtain the target image; or the acquisition module 901 may specifically be configured to receive, after the dynamic image is photographed by the second device to obtain the target image, the target image sent by the second device.
An embodiment of this application provides an image processing apparatus. When the first device displays a dynamic image, although the angle of the target image obtained by photographing the dynamic image with the second device changes as the posture of the first device changes, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body falls within the first preset interval corresponding to the first posture, the apparatus can still determine the first character corresponding to the first graphic and identify the first device based on the first character. Therefore, the apparatus can identify the scanned device even when the scanned device shakes.
The image processing apparatus in the embodiments of this application may be an apparatus, or a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; this is not specifically limited in the embodiments of this application.
The image processing apparatus in the embodiments of this application may be an apparatus with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of this application.
The image processing apparatus provided in the embodiments of this application can implement the processes implemented in the method embodiments of FIG. 4 to FIG. 8; to avoid repetition, details are not repeated here.
Optionally, as shown in FIG. 10, an embodiment of this application further provides an electronic device 1000, including a processor 1001, a memory 1002, and a program or instructions stored in the memory 1002 and executable on the processor 1001. When the program or instructions are executed by the processor 1001, the processes of the above image processing method embodiments are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiments of this application includes the above-mentioned mobile electronic device and non-mobile electronic device.
FIG. 11 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of this application.
The electronic device 1100 includes but is not limited to components such as a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, and a processor 1110.
Those skilled in the art can understand that the electronic device 1100 may further include a power supply (such as a battery) for supplying power to the components. The power supply may be logically connected to the processor 1110 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system. The electronic device structure shown in FIG. 11 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown, combine some components, or use a different component arrangement, and details are not repeated here.
The processor 1110 may be configured to acquire a target image, the target image being an image obtained by photographing, with the second device, the dynamic image displayed by the first device, the dynamic image indicating the configuration information of the first device, and the first device having a first posture. The processor 1110 may further be configured to recognize the graphic main body of a first graphic and the graphic auxiliary body of the first graphic; determine, in a case where the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, a first character corresponding to the first graphic; and identify the first device based on the first character, where the first preset interval corresponds to the first posture and the first graphic is a graphic in the target image.
Optionally, the processor 1110 may further be configured to recognize, in a case where identifying the first device based on the first character fails, the graphic main body of a second graphic and the graphic auxiliary body of the second graphic; determine, in a case where the position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic falls within a second preset interval, a second character corresponding to the second graphic; and identify the first device based on the first character and the second character, where the second graphic is another graphic in the target image and the second preset interval corresponds to the first posture.
Optionally, the processor 1110 may specifically be configured to: in a case where the rotation angle of the first device relative to a first direction is less than or equal to a preset angle, separately acquire the projections, in a second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, the second direction being perpendicular to the first direction; determine a first ratio according to those projections; and, in a case where the first ratio falls within the first preset interval, determine a target graphic obtained by performing image correction on the first graphic and determine, according to a preset graphic-to-character correspondence rule, the first character corresponding to the target graphic.
Optionally, the target image includes at least one image region, each image region includes one direction identifier and at least one graphic, and each graphic includes one graphic main body and at least one graphic auxiliary body. The processor 1110 may further be configured to determine, before the graphic main body and the graphic auxiliary body of the first graphic are recognized, according to the direction identifier of a first image region among the at least one image region, that the first graphic in the first image region is a graphic to be recognized.
Optionally, the processor 1110 may specifically be configured to determine the first endpoint of the first graphic and the second endpoint of the first graphic according to the direction identifier of the first image region; determine a first length and a second length; and determine the ratio of the first length to the second length as the first ratio. The first length is the length between the projections, in the second direction, of the first endpoint and of the graphic auxiliary body of the first graphic; the second length is the length between the projections, in the second direction, of the first endpoint and the second endpoint.
Optionally, the processor 1110 may specifically be configured to photograph the dynamic image through the input unit 1104 to obtain the target image; or the processor 1110 may specifically be configured to receive, through the radio frequency unit 1101, the target image sent by the second device after the dynamic image is photographed by the second device to obtain the target image.
An embodiment of this application provides an electronic device. When the first device displays a dynamic image, although the angle of the target image obtained by photographing the dynamic image with the second device changes as the posture of the first device changes, when the position of the graphic auxiliary body of the first graphic in the target image on the graphic main body falls within the first preset interval corresponding to the first posture, the electronic device can still determine the first character corresponding to the first graphic and identify the first device based on the first character. Therefore, the electronic device can identify the scanned device even when the scanned device shakes.
It should be understood that in the embodiments of this application, the input unit 1104 may include a graphics processing unit (GPU) 11041 and a microphone 11042; the graphics processing unit 11041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The display unit 1106 may include a display panel 11061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071 is also called a touchscreen and may include two parts: a touch detection apparatus and a touch controller. The other input devices 11072 may include but are not limited to a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not repeated here. The memory 1109 may be configured to store software programs and various data, including but not limited to application programs and an operating system. The processor 1110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1110.
An embodiment of this application further provides a readable storage medium storing a program or instructions. When the program or instructions are executed by a processor, the processes of the above image processing method embodiments are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor of the electronic device in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of this application further provides a chip, the chip including a processor and a communication interface, the communication interface being coupled to the processor, the processor being configured to run a program or instructions to implement the processes of the above image processing method embodiments, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It should be understood that the chip mentioned in the embodiments of this application may also be called a system-on-chip, a system chip, a chip system, or the like.
It should be noted that in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "includes a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be noted that the scope of the methods and apparatuses in the implementations of this application is not limited to performing functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in a reverse order according to the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods of the embodiments of this application.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the above specific implementations. The above specific implementations are merely illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art can devise many other forms without departing from the spirit of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (15)

  1. An image processing method, the method comprising:
    acquiring a target image, wherein the target image is an image obtained by photographing, by a second device, a dynamic image displayed by a first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture;
    recognizing a graphic main body of a first graphic and a graphic auxiliary body of the first graphic, wherein the first graphic is a graphic in the target image;
    in a case that a position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, determining a first character corresponding to the first graphic, wherein the first preset interval corresponds to the first posture; and
    identifying the first device based on the first character.
  2. The method according to claim 1, wherein after the identifying the first device based on the first character, the method further comprises:
    in a case that identifying the first device based on the first character fails, recognizing a graphic main body of a second graphic and a graphic auxiliary body of the second graphic, wherein the second graphic is a graphic in the target image other than the first graphic;
    in a case that a position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic falls within a second preset interval, determining a second character corresponding to the second graphic, wherein the second preset interval corresponds to the first posture; and
    identifying the first device based on the first character and the second character.
  3. The method according to claim 1, wherein the determining, in a case that the position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within the first preset interval, the first character corresponding to the first graphic comprises:
    in a case that a rotation angle of the first device relative to a first direction is less than or equal to a preset angle, separately acquiring projections, in a second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, wherein the second direction is perpendicular to the first direction;
    determining a first ratio according to the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic; and
    in a case that the first ratio falls within the first preset interval, determining a target graphic obtained by performing image correction on the first graphic, and determining, according to a preset graphic-to-character correspondence rule, the first character corresponding to the target graphic.
  4. The method according to claim 3, wherein the target image comprises at least one image region, each image region comprises one direction identifier and at least one graphic, and each graphic comprises one graphic main body and at least one graphic auxiliary body; before the recognizing the graphic main body of the first graphic and the graphic auxiliary body of the first graphic, the method further comprises:
    determining, according to a direction identifier of a first image region in the at least one image region, the first graphic in the first image region as a graphic to be recognized;
    wherein the determining the first ratio according to the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic comprises:
    determining a first endpoint of the first graphic and a second endpoint of the first graphic according to the direction identifier of the first image region;
    determining a first length and a second length, wherein the first length is a length between projections, in the second direction, of the first endpoint and of the graphic auxiliary body of the first graphic, and the second length is a length between projections, in the second direction, of the first endpoint and the second endpoint; and
    determining a ratio of the first length to the second length as the first ratio.
  5. The method according to any one of claims 1 to 4, wherein the acquiring a target image comprises:
    photographing the dynamic image by the second device to obtain the target image;
    or,
    after the dynamic image is photographed by the second device to obtain the target image, receiving the target image sent by the second device.
  6. An image processing apparatus, the image processing apparatus comprising an acquisition module, a recognition module, and a determination module;
    the acquisition module is configured to acquire a target image, wherein the target image is an image obtained by photographing, by a second device, a dynamic image displayed by a first device, the dynamic image is used to indicate configuration information of the first device, and the first device has a first posture;
    the recognition module is configured to recognize a graphic main body of a first graphic and a graphic auxiliary body of the first graphic, wherein the first graphic is a graphic in the target image acquired by the acquisition module;
    the determination module is configured to determine, in a case that a position of the graphic auxiliary body of the first graphic on the graphic main body of the first graphic falls within a first preset interval, a first character corresponding to the first graphic, wherein the first preset interval corresponds to the first posture; and
    the recognition module is further configured to identify the first device based on the first character determined by the determination module.
  7. The image processing apparatus according to claim 6, wherein
    the recognition module is further configured to recognize, in a case that identifying the first device based on the first character fails, a graphic main body of a second graphic and a graphic auxiliary body of the second graphic, wherein the second graphic is a graphic in the target image other than the first graphic;
    the determination module is further configured to determine, in a case that a position of the graphic auxiliary body of the second graphic on the graphic main body of the second graphic falls within a second preset interval, a second character corresponding to the second graphic, wherein the second preset interval corresponds to the first posture; and
    the recognition module is further configured to identify the first device based on the first character and the second character.
  8. The image processing apparatus according to claim 7, wherein the determination module is specifically configured to: in a case that a rotation angle of the first device relative to a first direction is less than or equal to a preset angle, separately acquire projections, in a second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic, wherein the second direction is perpendicular to the first direction; determine a first ratio according to the projections, in the second direction, of the graphic main body of the first graphic and of the graphic auxiliary body of the first graphic; and, in a case that the first ratio falls within the first preset interval, determine a target graphic obtained by performing image correction on the first graphic, and determine, according to a preset graphic-to-character correspondence rule, the first character corresponding to the target graphic.
  9. The image processing apparatus according to claim 8, wherein the target image comprises at least one image region, each image region comprises one direction identifier and at least one graphic, and each graphic comprises one graphic main body and at least one graphic auxiliary body;
    the determination module is further configured to determine, before the graphic main body of the first graphic and the graphic auxiliary body of the first graphic are recognized, according to a direction identifier of a first image region in the at least one image region, the first graphic in the first image region as a graphic to be recognized;
    the determination module is specifically configured to determine a first endpoint of the first graphic and a second endpoint of the first graphic according to the direction identifier of the first image region; determine a first length and a second length; and determine a ratio of the first length to the second length as the first ratio;
    wherein the first length is a length between projections, in the second direction, of the first endpoint and of the graphic auxiliary body of the first graphic, and the second length is a length between projections, in the second direction, of the first endpoint and the second endpoint.
  10. The image processing apparatus according to claim 6, wherein
    the acquisition module is specifically configured to photograph the dynamic image to obtain the target image; or the acquisition module is specifically configured to receive, after the dynamic image is photographed by the second device to obtain the target image, the target image sent by the second device.
  11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
  12. A readable storage medium, storing a program or instructions, wherein the program or instructions, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
  13. A chip, comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the method according to any one of claims 1 to 5.
  14. A computer program product, wherein the program product is executed by at least one processor to implement the method according to any one of claims 1 to 5.
  15. An image processing apparatus, wherein the apparatus is configured to perform the image processing method according to any one of claims 1 to 5.
PCT/CN2021/117243 2020-09-10 2021-09-08 图像处理方法、装置及电子设备 WO2022052956A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21866009.0A EP4202607A4 (en) 2020-09-10 2021-09-08 IMAGE PROCESSING METHOD AND DEVICE AND ELECTRONIC DEVICE
US18/119,816 US20230215200A1 (en) 2020-09-10 2023-03-09 Image processing method and apparatus and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010948810.5A CN112148124B (zh) 2020-09-10 2020-09-10 图像处理方法、装置及电子设备
CN202010948810.5 2020-09-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/119,816 Continuation US20230215200A1 (en) 2020-09-10 2023-03-09 Image processing method and apparatus and electronic device

Publications (1)

Publication Number Publication Date
WO2022052956A1 true WO2022052956A1 (zh) 2022-03-17

Family

ID=73889995

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/117243 WO2022052956A1 (zh) 2020-09-10 2021-09-08 图像处理方法、装置及电子设备

Country Status (4)

Country Link
US (1) US20230215200A1 (zh)
EP (1) EP4202607A4 (zh)
CN (1) CN112148124B (zh)
WO (1) WO2022052956A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148124B (zh) * 2020-09-10 2024-07-26 维沃移动通信有限公司 图像处理方法、装置及电子设备

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103685206A (zh) * 2012-09-25 2014-03-26 阿里巴巴集团控股有限公司 识别信息的生成方法和系统
CN106231104A (zh) * 2016-08-03 2016-12-14 广东乐源数字技术有限公司 一种手环与智能手机绑定的方法
EP3493110A1 (en) * 2017-11-29 2019-06-05 Samsung Electronics Co., Ltd. Electronic device recognizing text in image
CN111598096A (zh) * 2020-04-03 2020-08-28 维沃移动通信有限公司 一种图像处理方法及电子设备
CN112148124A (zh) * 2020-09-10 2020-12-29 维沃移动通信有限公司 图像处理方法、装置及电子设备

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN108111749B (zh) * 2017-12-06 2020-02-14 Oppo广东移动通信有限公司 图像处理方法和装置
JP7164243B2 (ja) * 2019-02-27 2022-11-01 日本電気株式会社 画像処理装置、画像処理方法、プログラム
CN110097019B (zh) * 2019-05-10 2023-01-10 腾讯科技(深圳)有限公司 字符识别方法、装置、计算机设备以及存储介质
CN110300286A (zh) * 2019-07-17 2019-10-01 维沃移动通信有限公司 一种图像显示方法及终端
CN111488874A (zh) * 2020-04-03 2020-08-04 中国农业大学 一种指针式仪表倾斜校正方法及系统

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN103685206A (zh) * 2012-09-25 2014-03-26 阿里巴巴集团控股有限公司 识别信息的生成方法和系统
CN106231104A (zh) * 2016-08-03 2016-12-14 广东乐源数字技术有限公司 一种手环与智能手机绑定的方法
EP3493110A1 (en) * 2017-11-29 2019-06-05 Samsung Electronics Co., Ltd. Electronic device recognizing text in image
CN111598096A (zh) * 2020-04-03 2020-08-28 维沃移动通信有限公司 一种图像处理方法及电子设备
CN112148124A (zh) * 2020-09-10 2020-12-29 维沃移动通信有限公司 图像处理方法、装置及电子设备

Non-Patent Citations (1)

Title
See also references of EP4202607A4 *

Also Published As

Publication number Publication date
EP4202607A4 (en) 2024-01-17
US20230215200A1 (en) 2023-07-06
CN112148124B (zh) 2024-07-26
EP4202607A1 (en) 2023-06-28
CN112148124A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
US10755063B2 (en) Method and apparatus for detecting two-dimensional barcode
CN110784651B (zh) 一种防抖方法及电子设备
US10560624B2 (en) Imaging control device, imaging control method, camera, camera system, and program
CN109684980B (zh) 自动阅卷方法及装置
CN109101120B (zh) 图像显示的方法和装置
CN110059652B (zh) 人脸图像处理方法、装置及存储介质
WO2016145755A1 (zh) 智能终端的图片旋转的方法及智能终端
CN111368820A (zh) 文本标注方法、装置及存储介质
CN109165606B (zh) 一种车辆信息的获取方法、装置以及存储介质
CN107105166B (zh) 图像拍摄方法、终端和计算机可读存储介质
CN111656391B (zh) 一种图像校正方法和终端
WO2021197395A1 (zh) 图像处理方法及电子设备
WO2019100407A1 (zh) 基于图样中标志图形点坐标的转换关系定位终端屏幕
CN111652942B (zh) 摄像模组的标定方法、第一电子设备和第二电子设备
WO2022052956A1 (zh) 图像处理方法、装置及电子设备
CN110942064A (zh) 图像处理方法、装置和电子设备
CN106383679B (zh) 一种定位方法及其终端设备
WO2018066902A1 (en) Consistent spherical photo and video orientation correction
CN109729264B (zh) 一种图像获取方法及移动终端
CN112561809A (zh) 图像处理方法、装置及设备
JP2016110469A (ja) 協働動作をする情報処理装置、携帯式電子機器、および位置の特定方法
EP4137930A1 (en) Image display method and apparatus, terminal device, storage medium, and program product
JP6067040B2 (ja) 情報処理装置、情報処理方法及びプログラム
CN115623337B (zh) 跳转指示信息显示方法、装置、电子设备及存储介质
US12067697B2 (en) Method and device for correcting image, electronic equipment, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21866009

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021866009

Country of ref document: EP

Effective date: 20230322

NENP Non-entry into the national phase

Ref country code: DE