CN114911382A - Signature display method and device and related equipment and storage medium thereof - Google Patents


Info

Publication number
CN114911382A
CN114911382A (application CN202210488405.9A)
Authority
CN
China
Prior art keywords
signature
user
target device
target
picture data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210488405.9A
Other languages
Chinese (zh)
Inventor
谢潮贤
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202210488405.9A
Publication of CN114911382A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 - Selection of displayed objects or displayed text elements
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a signature display method and apparatus, related devices, and a storage medium, wherein the method includes: acquiring picture data collected by a target device; detecting whether the picture data satisfies a signature display condition; and in response to the picture data satisfying the signature display condition, displaying a virtual signature carrier on the picture data displayed by the target device. In this manner, signatures can be displayed without occupying real space and without problems of material loss and cost.

Description

Signature display method and device and related equipment and storage medium thereof
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a signature display method and apparatus, and related devices and storage media.
Background
At present, most events use a physical sign-in board for offline check-in. However, owing to the nature of the event or other factors, a physical sign-in board may be lost and cannot be stored and displayed over the long term. In addition, a physical sign-in board occupies a certain amount of space and entails material loss and cost.
Disclosure of Invention
The application at least provides a signature display method and device and related equipment and storage media thereof.
A first aspect of the present application provides a signature display method, including: acquiring picture data collected by a target device; detecting whether the picture data satisfies a signature display condition; and in response to the picture data satisfying the signature display condition, displaying a virtual signature carrier on the picture data displayed by the target device.
Therefore, when the picture data collected by the target device satisfies the signature display condition, the virtual signature carrier is displayed on the picture data displayed by the target device. The user can then see displayed signatures on that picture data, realizing signature display in a virtual space. Compared with signing on materials such as a physical sign-in board, this avoids occupying real space with such materials, eliminates problems of material loss and cost, and allows the signature to be displayed long-term.
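As a rough illustration only (not part of the claims), the three claimed steps (acquire, detect, display) can be sketched in Python. `Frame` and its `detected_objects` field are hypothetical stand-ins for the device's camera output and an upstream object detector:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One frame of picture data collected by the target device (hypothetical model)."""
    pixels: bytes = b""
    detected_objects: list = field(default_factory=list)  # object names found by a detector

def meets_display_condition(frame: Frame, target_object: str = "wall_a") -> bool:
    # Signature display condition: the picture data contains the target object.
    return target_object in frame.detected_objects

def render(frame: Frame) -> str:
    # Overlay the virtual signature carrier only when the condition is met;
    # otherwise show the camera picture alone.
    if meets_display_condition(frame):
        return "picture + virtual signature carrier"
    return "picture"
```

Here `render` returns a label in place of real compositing; an actual implementation would draw the carrier as an augmented-reality overlay anchored in the camera view.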
Wherein the signature display condition includes that the picture data contains a target object, and displaying the virtual signature carrier on the picture data displayed by the target device includes: displaying the virtual signature carrier in the picture data, wherein the display position of the virtual signature carrier in the picture data is a preset position or is determined based on the position of the target object in the picture data.
Therefore, the virtual signature carrier can be displayed when the target object is captured, realizing linked display between the target object and the virtual signature carrier. In addition, the display position of the virtual signature carrier can be preset or determined from the position of the target object, which improves the flexibility of the display position, allows a closer association with the target object, and enhances the augmented-display effect.
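The two position options described above (a fixed preset position, or a position derived from the target object) might look like the following sketch; the normalized-coordinate convention is an assumption, not something the application specifies:

```python
def carrier_position(object_bbox=None, preset=(0.5, 0.5)):
    """Return normalized (x, y) screen coordinates for the virtual signature carrier.

    If the target object's bounding box (x0, y0, x1, y1) in the picture
    data is known, centre the carrier on it; otherwise fall back to the
    preset position (screen centre by default)."""
    if object_bbox is None:
        return preset
    x0, y0, x1, y1 = object_bbox
    return ((x0 + x1) / 2, (y0 + y1) / 2)
```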
The signature display method further includes: in response to the current position of the target device being within a preset area, acquiring a signature of the user of the target device and determining a presentation position of that signature on the virtual signature carrier, wherein the virtual signature carrier presents the signature of the user of the target device at the presentation position.
Therefore, signing can be triggered based on the user's location, realizing intelligent signing by users within a certain position range.
Before acquiring the signature of the user of the target device in response to the current position of the target device being within the preset area, the signature display method further includes: in response to a positioning trigger operation by the user, acquiring the current position of the target device.
Therefore, positioning can be achieved based on the positioning operation by the user.
Wherein acquiring the signature of the user of the target device in response to the current position of the target device being within the preset area includes: displaying a signature editing region in response to a signature trigger operation by the user while the current position is within the preset area; and generating the signature of the user of the target device based on strokes entered by the user in an input area of the signature editing region.
Therefore, signing can be triggered by combining the user's location with the user's signature operation, making the triggering process more natural. Moreover, the user generates the signature by entering strokes directly on the target device, i.e., on the device the user already holds, without needing an additional device or carrier, which improves the convenience of signing.
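A minimal sketch of this location-gated signing flow, with a rectangular preset area and strokes modelled as lists of points (both modelling choices are assumptions made here for illustration):

```python
def inside_preset_area(pos, area):
    """pos = (x, y); area = (x_min, y_min, x_max, y_max)."""
    x, y = pos
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def try_open_signature_editor(current_pos, preset_area):
    # The signature editing region is shown only when the target device
    # is currently located within the preset area.
    if inside_preset_area(current_pos, preset_area):
        return "signature editing region"
    return None

def build_signature(strokes):
    # The user's signature is generated from the strokes entered in the
    # input area; each stroke is an ordered list of (x, y) points.
    return {"strokes": list(strokes)}
```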
Wherein generating the signature of the user of the target device based on the strokes input by the user in the input area of the signature editing area comprises: and displaying strokes input by a user in the input area of the signature editing area according to a first display parameter, wherein the first display parameter is preset or determined based on the selection of the user.
Therefore, flexible setting of signature display can be achieved, and user experience is improved.
After displaying the strokes entered by the user in the input area of the signature editing region according to the first display parameter, the method further includes: adjusting the displayed strokes to a second display parameter in response to an adjustment operation by the user; and obtaining the user's signature from the strokes displayed with the second display parameter.
Therefore, after the user inputs the signature, the user can further adjust the display parameters of the signature, so that the signature is displayed according to the display parameters adjusted by the user, and the user experience is improved.
The first display parameter comprises at least one of color, display angle and font, wherein the display angle is used for determining the angle of the virtual signature carrier for displaying the signature.
Thus, different display parameters can be set.
The second display parameter comprises at least one of color, display angle and font.
Thus, different display parameters may be adjusted.
The user's adjustment operation is the user's selection operation in the parameter configuration area of the signature editing area.
Therefore, the display parameters can be adjusted by selecting the display parameters in the parameter configuration area, and the adjustment of the display parameters is facilitated.
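One way to model the first and second display parameters and the adjustment operation is a simple dictionary update restricted to the parameters named above (color, display angle, font); the default values here are illustrative assumptions:

```python
DEFAULT_PARAMS = {"color": "black", "angle": 0, "font": "default"}  # first display parameter

def apply_adjustment(params, selection):
    """Apply a user's selection from the parameter configuration area.

    Returns a new parameter set (the second display parameter); keys
    other than color/angle/font are ignored."""
    allowed = {"color", "angle", "font"}
    adjusted = dict(params)
    adjusted.update({k: v for k, v in selection.items() if k in allowed})
    return adjusted
```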
After acquiring the picture data collected by the target device, the signature display method further includes: displaying the picture data on the target device, and displaying a positioning item and/or a signature item on the picture data.
Therefore, the related function triggering items are displayed on the picture data acquired by the target equipment, so that a user can conveniently perform related function operation while acquiring pictures, and the user experience is improved.
The positioning triggering operation comprises the triggering operation of a user on a positioning item; the signature trigger operation includes at least one of: the method comprises the steps of triggering operation of a user on a signature item and preset touch operation of the user on a virtual signature carrier displayed by a target device.
Therefore, the target equipment can be positioned by triggering the positioning item, and the positioning is realized in a simple and easy-to-operate manner. In addition, the triggering of the signature may be implemented in different ways.
Wherein acquiring the current position of the target device includes: positioning the target device based on target map data and picture data currently collected by the target device, or sending the currently collected picture data to a positioning processing device so that the positioning processing device positions the target device based on the target map data and that picture data; and in response to successful positioning, obtaining the current position given by the positioning.
Therefore, the target equipment is positioned by using the target map data, and the positioning accuracy of the target equipment is improved.
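The local-versus-remote positioning choice can be sketched as a dispatch function; `match_fn` stands in for a real visual-localization routine that matches the picture data against the target map data (not implemented here), and `remote_fn` for sending the picture data to the positioning processing device. Both names are hypothetical:

```python
def locate(frame, map_data, match_fn, remote_fn=None):
    """Return the device pose, or None if positioning fails.

    If a remote positioning processing device is configured, the frame
    is sent to it; otherwise matching against the target map data is
    done on-device."""
    pose = remote_fn(frame) if remote_fn is not None else match_fn(frame, map_data)
    # On failure the caller may display guidance information and retry
    # with newly collected picture data.
    return pose
```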
Before acquiring the current position of the target device, the signature display method further includes: detecting that the collected picture data does not meet the positioning requirement; displaying first guidance information instructing the user to adjust the angle of the target device; and in response to a first acquisition operation by the user, re-collecting picture data of the target device.
Therefore, the first guidance information is displayed to guide the user in adjusting the angle of the target device, so that the picture data collected by the target device is better suited for positioning, improving the positioning success rate.
After positioning the target device based on the target map data and the currently collected picture data, the signature display method further includes: displaying second guidance information instructing the user to adjust the angle of the target device; collecting new picture data of the target device in response to a second acquisition operation by the user; and performing again, with the new picture data, the step of positioning the target device based on the target map data and the currently collected picture data, together with the subsequent steps.
Therefore, displaying the second guidance information guides the user in adjusting the angle of the target device, so that the newly collected picture data is better suited for positioning, improving the success rate of repositioning.
Wherein determining the presentation position of the signature of the user of the target device on the virtual signature carrier includes: determining, according to a preset strategy, the presentation position from a region of the virtual signature carrier where no signature is yet displayed; or taking a position selected by the user on the virtual signature carrier as the presentation position.
Thus, the position of the presentation of the signature on the virtual signature carrier can be determined in different ways.
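Modelling the carrier as a grid of signature slots, the two strategies above (a preset first-free-slot rule, or a user-selected position) could be sketched as follows; the grid model is an assumption made for illustration:

```python
def pick_slot(occupied, grid_size, user_choice=None):
    """Choose where a new signature is presented on the virtual carrier.

    If the user selected a slot, honour it; otherwise return the first
    free slot in reading order (one possible preset strategy), or None
    when the carrier is full."""
    if user_choice is not None:
        return user_choice
    rows, cols = grid_size
    for r in range(rows):
        for c in range(cols):
            if (r, c) not in occupied:
                return (r, c)
    return None
```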
A second aspect of the present application provides a signature display apparatus, including: the acquisition module is used for acquiring the picture data acquired by the target equipment; the detection module is used for detecting whether the picture data meets the signature display condition; and the display module is used for responding to the picture data meeting the signature display condition and displaying the virtual signature carrier on the picture data displayed by the target equipment.
A third aspect of the present application provides an electronic device, including a memory and a processor, where the memory stores program instructions and the processor is configured to execute the program instructions to implement the signature display method described above.
A fourth aspect of the present application provides a computer-readable storage medium for storing program instructions that can be executed to implement the signature display method described above.
According to the above scheme, when the picture data collected by the target device satisfies the signature display condition, the virtual signature carrier is displayed on the picture data displayed by the target device. The user can then see displayed signatures on that picture data, realizing signature display in a virtual space. Compared with signing on materials such as a physical sign-in board, this avoids occupying real space with such materials, eliminates problems of material loss and cost, and allows the signature to be displayed long-term.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of an embodiment of a signature display method provided in the present application;
FIG. 2 is a schematic diagram of one embodiment of a target device display interface provided herein;
FIG. 3 is a schematic diagram of another embodiment of a target device display interface provided by the present application;
FIG. 4 is a schematic view of an embodiment of a default region provided herein;
FIG. 5 is a schematic flow chart diagram illustrating one embodiment of obtaining a signature of a user of a target device provided herein;
FIG. 6 is a flow chart illustrating an embodiment of adjusting display parameters provided herein;
FIG. 7 is a flowchart illustrating an embodiment of obtaining a current location of a target device provided herein;
FIG. 8 is a schematic diagram illustrating one embodiment of physical space video material provided herein;
FIG. 9 is a schematic view of one embodiment of target map data provided herein;
FIG. 10 is a schematic diagram of an embodiment of a signature display apparatus provided in the present application;
FIG. 11 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The embodiments of the present application will be described in detail below with reference to the drawings.
In the following description, for purposes of explanation rather than limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, the term "plurality" herein means two or more. The term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
In an embodiment, the signature display method described herein may be performed as follows: a user performs a relevant operation on a terminal device; the terminal device generates a corresponding operation request from the acquired operation and sends it to another, non-operated device, for example a positioning processing device; and that other device executes the relevant operation in response to the request. In this embodiment the execution subject is the other device, and "responding to the user's operation" means responding to the operation request issued by the connected terminal device according to the user's operation. It is to be understood that, in other embodiments, the signature display method may instead be executed by the terminal device itself, which responds directly to the user's operation performed on it; in that case the execution subject is the terminal device. The terminal device includes, but is not limited to, a mobile phone, a tablet computer, a computer, and the like, and is not specifically limited herein.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of the signature display method provided in the present application. It should be noted that the order of the steps shown in fig. 1 is not limited in this embodiment, provided the result is substantially the same. As shown in fig. 1, the present embodiment includes:
step S11: and acquiring picture data acquired by the target equipment.
The method of this embodiment displays, on the picture data shown by the target device, a virtual signature carrier for presenting users' signatures. The target device may be, but is not limited to, a mobile phone, a tablet computer, a computer, etc.
In an embodiment, when the execution subject is the positioning processing device, the target device sends the collected picture data to the positioning processing device, which thus acquires the picture data collected by the target device. It is to be understood that, in other embodiments, when the execution subject is the terminal device, i.e., the target device itself, the picture data is collected by the target device's camera, and the target device thereby acquires the collected picture data.
In one embodiment, the picture data needs to be collected by a camera of the target device. Therefore, before opening the corresponding application or mini-program on the target device to collect picture data, inquiry information is displayed on the target device in response to the user's operation of launching the application or mini-program, the inquiry information asking whether the user allows the application or mini-program to access the camera; and in response to the user's confirmation operation, picture-data collection and the subsequent steps are carried out.
Step S12: whether the picture data satisfies the signature display condition is detected.
In this embodiment, it is detected whether the picture data satisfies the signature display condition; when it does, step S13 is executed. The signature display condition is not limited and may be set according to actual use requirements.
In one embodiment, the signature display condition is that the picture data contains the target object; that is, the condition is satisfied when the picture data collected by the target device contains the target object. The target object is not limited and may be set according to actual use requirements; for example, it may be a wall or a display whiteboard. Taking the target object being wall A as an example, the signature display condition is satisfied when the picture data collected by the target device contains wall A, which subsequently carries the virtual signature carrier.
It is to be understood that, in other embodiments, the signature display condition may instead be that the picture data contains the complete target object. For example, taking the target object being display whiteboard B, the condition is satisfied when the collected picture data contains the complete whiteboard B for subsequently carrying the virtual signature carrier, and is not satisfied when the picture data contains only part of whiteboard B. Of course, the signature display condition may also be that the collected picture data is clear, or that it is clear and contains the target object, etc.; this is not limited herein and may be set according to actual use requirements.
Step S13: and displaying the virtual signature carrier on the picture data displayed by the target device in response to the picture data meeting the signature display condition.
In this embodiment, in response to the picture data satisfying the signature display condition, a virtual signature carrier for presenting users' signatures is displayed on the picture data displayed by the target device. That is, the users' signatures can be shown on the virtual signature carrier in an augmented-reality manner, i.e., displayed in a virtual space. Compared with signing on a physical board and displaying the signature through that board, this avoids occupying real space with materials such as a physical board; moreover, because no actual materials are used for display, there are no problems of material loss or cost, and the signatures can be displayed and preserved long-term.
For example, as shown in fig. 2, a schematic diagram of an embodiment of a display interface of the target device provided in the present application, take the target device being a mobile phone: picture data is collected by the phone's camera and shown on the phone's display interface, i.e., as shown in fig. 2-(a), the live camera view is shown on the display interface; as shown in fig. 2-(b), in response to the picture data collected by the camera, i.e., the live camera view, satisfying the signature display condition, the virtual signature carrier is displayed over the live view.
In the above embodiment, when the picture data collected by the target device satisfies the signature display condition, the virtual signature carrier is displayed on the picture data displayed by the target device. The user can then see displayed signatures on that picture data, realizing signature display in a virtual space. Compared with signing on materials such as a physical sign-in board, this avoids occupying real space, eliminates problems of material loss and cost, and allows the signature to be displayed long-term.
In one embodiment, the signature display condition includes that the picture data contains the target object, and the virtual signature carrier is displayed in the picture data shown by the target device. That is, when the displayed picture data contains the target object, the user sees, through the target device, the virtual signature carrier, and hence the users' signatures, displayed on the target object. The virtual signature carrier thus appears when the target object is captured, realizing linked display between the target object and the carrier. No materials such as a physical sign-in board are needed, so real space is not occupied, there are no problems of material loss or cost, and the signatures can be displayed long-term.
In a specific embodiment, the display position of the virtual signature carrier in the picture data displayed by the target device may be a preset position, improving the flexibility of the display position. The preset position is not limited and may be set according to actual use requirements; for example, it may be centred horizontally in the picture data, or set near its upper-left corner. As shown in fig. 2-(b), taking the target device being a mobile phone, the target object being wall A, and the preset position being horizontally centred in the picture data: the live camera view is the picture data displayed by the target device, and when the live view contains wall A, i.e., when the picture data satisfies the signature display condition, the virtual signature carrier is displayed horizontally centred in the live view.
In other specific embodiments, the display position of the virtual signature carrier in the picture data may instead be determined based on the position of the target object in the picture data, which both improves flexibility and associates the carrier more closely with the target object, enhancing the augmented-display effect. For example, as shown in fig. 3, a schematic diagram of another embodiment of the display interface of the target device provided in the present application, take the target device being a mobile phone and the target object being wall A: the live camera view is the picture data displayed by the target device. Part of wall A appears in the upper-left region of the live view, and since the user should see the virtual signature carrier displayed on that part of wall A, the carrier's display position in the live view is the upper-left region containing that part of the wall. The virtual signature carrier may be complete, i.e., include all users' signatures; alternatively, so that the user can clearly see individual signatures, it may include only some users' signatures, e.g., those whose positions in the carrier correspond to the visible part of wall A within the whole wall.
It should be noted that, for example, when the picture data acquired by the target device includes the target object and the user then moves the target device so that the acquired picture data no longer includes the target object, the picture data no longer satisfies the signature display condition, and the virtual signature carrier is no longer displayed on the picture data shown by the target device. For example, as shown in fig. 3, taking the target device as a mobile phone and the target object as wall A, the live view of the mobile phone camera is the picture data displayed by the target device. The virtual signature carrier is initially displayed in the live view; when the user moves the phone so that the live view no longer includes wall A, the picture data acquired by the camera no longer satisfies the signature display condition, so the virtual signature carrier is hidden and only the image captured by the camera is displayed.
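The show/hide behavior described above can be sketched as a simple per-frame decision. This is a minimal illustration only; the callback names and the object-set representation of the frame are hypothetical, not part of the original disclosure.

```python
def update_display(frame_objects, target_object, show_carrier, hide_carrier):
    """Show the virtual signature carrier only while the target object
    is present in the current frame (the signature display condition);
    otherwise show just the raw camera image."""
    if target_object in frame_objects:
        show_carrier()   # overlay the carrier on the live camera view
        return True
    hide_carrier()       # carrier hidden; only the captured image remains
    return False
```

Each new camera frame would re-run this check, so moving the phone away from wall A hides the carrier on the very next frame.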
In an embodiment, the signature of the user of the target device can also be obtained and displayed on the virtual signature carrier; that is, signing can be triggered based on the user's location, enabling intelligent signing within a certain position range. Specifically, in response to the current position of the target device being within a preset area, the signature of the user of the target device is acquired, and the display position of that signature on the virtual signature carrier is determined, so that the signature is shown at the determined display position on the virtual signature carrier. In other words, once the current position of the target device is determined to be within the preset area, i.e., within the range in which signing is allowed, the user's signature is acquired and its display position on the virtual signature carrier is determined, so that when the picture data acquired by the target device satisfies the signature display condition, the user can see his or her own signature on the virtual signature carrier displayed by the target device. The extent and shape of the preset area are not limited and may be set according to actual use requirements.
As shown in fig. 4, which is a schematic diagram of an embodiment of a preset area provided in the present application, take as an example a signature display condition requiring the target object to be included in the picture data. Since the virtual signature carrier can be displayed on the picture data shown by the target device only when the acquired picture data includes the target object, a circular area centered on the target object with radius R = X1 can serve as the preset area. As long as the current position of the target device lies within the circular area shown in fig. 4, i.e., within the preset area, intelligent signing is available to the user.
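The circular preset-area check above reduces to a distance test between the device position and the target object. A minimal sketch follows, assuming 2D coordinates in a common frame; the function name and argument layout are illustrative assumptions.

```python
import math

def in_preset_area(device_xy, target_xy, radius_x1):
    """True when the device's current position lies inside the circular
    preset area of radius X1 centered on the target object (fig. 4)."""
    return math.dist(device_xy, target_xy) <= radius_x1
```

The same test would gate the signature flow: only when it returns True is the signature editing area made available.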
In a specific embodiment, the user may directly upload a picture containing his or her signature stored on the target device, from which the signature of the user of the target device is obtained. As shown in fig. 5, which is a schematic flowchart of an embodiment of obtaining a signature of a user of a target device according to the present application, in other specific embodiments the signature is generated in real time from the user's input in a signature editing area, realizing intelligent signing. This specifically comprises the following substeps:
step S51: and responding to the signature triggering operation of the user and displaying a signature editing area when the current position is in a preset area.
In this embodiment, the signature editing area is displayed in response to the user's signature triggering operation while the current position is within the preset area, so that the user can subsequently input strokes in the signature editing area to generate the signature of the user of the target device. Combining the user's location with the user's signing operation to trigger signing makes the triggering process more user-friendly.
In one embodiment, as shown in fig. 2-(b), after the picture data collected by the target device is acquired, it is displayed on the target device together with a "signature" item. The user can trigger the "signature" item shown in fig. 2-(b) to complete the signature triggering operation, and in response, the signature editing area shown in fig. 2-(c) is displayed on the target device. In other embodiments, the user may instead perform a preset touch operation on the virtual signature carrier displayed by the target device to complete the signature triggering operation; the preset touch operation is not limited and may be set according to actual use requirements. For example, as shown in figs. 2-(a) and 2-(b), after the user triggers the "signature" item in fig. 2-(a), the interface shown in fig. 2-(b) is displayed, and when the live camera view includes the target object, the virtual signature carrier is displayed in fig. 2-(b). The preset touch operation may be touching an area of the displayed virtual signature carrier where no signature is shown, or touching any area of the carrier, etc. For example, taking the preset touch operation as the user triggering any area of the virtual signature carrier, in response to the user touching any position on the carrier as shown in fig. 2-(b), the signature editing area shown in fig. 2-(c) is displayed on the target device. It will be appreciated that in other embodiments the signature editing area may also be invoked by voice.
For example, taking the target device as the execution subject, the user utters voice information related to signing (for example, that a signature is required now); the target device receives and parses the voice information to learn the user's intention, and accordingly displays the signature editing area.
In a specific embodiment, after the user triggers the signing operation, portrait information of the user of the target device may be collected, and whether that user has already signed is determined by matching the collected portrait information against the portrait information of users who have completed signing, thereby preventing repeated signing. When the user is determined to be unsigned and the current position is within the preset area, the signature editing area is displayed on the target device; when the user is determined to have already signed, a prompt such as "you have already signed" is displayed on the target device.
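One common way to realize such portrait matching is to compare face embeddings against those of users who have already signed. The sketch below assumes embeddings are plain numeric vectors and uses a cosine-similarity threshold; the function names and the threshold value are hypothetical, and the disclosure does not specify a particular matching algorithm.

```python
def cosine_similarity(a, b):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def is_signed_user(new_embedding, signed_embeddings, threshold=0.8):
    """Compare the freshly captured portrait embedding against those of
    users who have completed signing; a match means repeat signing."""
    return any(cosine_similarity(new_embedding, e) >= threshold
               for e in signed_embeddings)
```

A True result would suppress the signature editing area and show the "you have already signed" prompt instead.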
In a specific embodiment, permission control may also be applied: after the user triggers the signing operation, related information of the user (for example, a mobile phone number or other account information) is acquired, so that unauthorized persons cannot sign. For example, taking the target device as the execution subject, when the user triggers the signing operation and the current position is within the preset area, the target device displays a permission-information input area; after the user enters the relevant permission information, the target device matches it against the stored permission information of authorized personnel. If the match succeeds, the user of the target device is an authorized person who may sign, and the signature editing area is then displayed on the target device.
Step S52: a signature of a user of the target device is generated based on strokes entered by the user in an input area of the signature editing area.
In the present embodiment, the signature of the user of the target device is generated based on the strokes input by the user in the input area of the signature editing area. The signature is directly realized on the equipment used by the user, and the signature does not need to be carried out on extra equipment or carriers, so that the convenience of the signature is improved.
In one embodiment, strokes entered by the user in the input area of the signature editing area are displayed according to a first display parameter. The first display parameter may be preset; that is, when the user makes no selection, the preset display parameter is used by default, and after the user inputs strokes in the input area, those strokes are displayed with the preset first display parameter so that the signature of the user of the target device can subsequently be generated from them. Of course, the first display parameter may also be determined by the user's selection. Specifically, as shown in fig. 2-(c), the signature editing area further includes a parameter configuration area in which the user can configure display parameters, so that the input strokes are displayed according to the first display parameter chosen by the user; this allows flexible configuration of signature display and improves user experience.
In a specific embodiment, the first display parameter includes at least one of a color, a display angle, and a font, where the display angle is used to determine an angle at which the virtual signature carrier displays the signature, that is, the signature is displayed on the virtual signature carrier at the display angle. Of course, in other embodiments, the first display parameter may also include other parameter types, which are not limited in this respect.
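The first and second display parameters above could be modeled as a small configuration object, with the user's adjustment operation producing a modified copy. The class name, defaults, and `apply_user_selection` helper are illustrative assumptions, not the application's actual implementation.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DisplayParams:
    """At least one of color, display angle (the angle at which the
    virtual signature carrier shows the signature) and font; the
    defaults stand in for the preset first display parameter used
    when the user makes no selection."""
    color: str = "yellow"
    angle_deg: float = 0.0
    font: str = "default"

def apply_user_selection(params: DisplayParams, **changes) -> DisplayParams:
    """Return a copy reflecting the user's adjustment operation in the
    parameter configuration area, i.e., the second display parameter."""
    return replace(params, **changes)
```

Keeping the object immutable means the first display parameter survives unchanged when the second one is derived from it, which matches the two-parameter flow of steps S61-S62.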
Referring to fig. 6, fig. 6 is a schematic flowchart of an embodiment of adjusting display parameters provided in the present application, and in an embodiment, a user may further adjust a generated signature of a target device, so that the signature is displayed according to the display parameters adjusted by the user, thereby improving user experience. The method specifically comprises the following substeps:
step S61: and adjusting the strokes displayed in the input area to the second display parameter in response to the adjustment operation of the user.
In this embodiment, in response to the user's adjustment operation, the strokes displayed in the input area are switched to the second display parameter. That is, the user can adjust the display parameters of the signature shown in the input area. In one embodiment, the adjustment operation is a selection made by the user in the parameter configuration area of the signature editing area; the user selects the display parameter to be changed there, completing the adjustment. For example, as shown in fig. 2-(c), take the color of the signature of the user of the target device as the display parameter to be adjusted. Under the first display parameter, the signature in the signature editing area is shown in yellow; because the user wants it to be black, the user selects black among the color parameters in the parameter configuration area, completing the adjustment of the signature color. The signature subsequently displayed in the input area of the signature editing area is black.
In a specific embodiment, the second display parameter includes at least one of a color, a display angle, and a font. Of course, in other embodiments, the second display parameter may also include other parameter types, which are not limited in this respect.
Step S62: and acquiring the signature of the user by using the strokes displayed by the second display parameter.
In this embodiment, the user's signature is obtained using the strokes displayed by the second display parameter. That is, the strokes displayed according to the display parameters adjusted by the user are taken as the signature of the user of the target device.
In one embodiment, as shown in fig. 2- (c), in order to ensure the correctness of the obtained user's signature, a "confirm" item is further displayed on the signature editing area, and in response to the user's trigger operation on the "confirm" item, the signature of the user of the target device is obtained, so that the signature confirmed by the user can be subsequently shown at the display position on the virtual signature carrier.
In one embodiment, as shown in fig. 2- (c), a "clear" item is also set in the signature editing region. In response to a user's trigger action on the "clear" item, the signed stroke in the input area is deleted to enable the user to re-enter the signed stroke.
In one embodiment, the display position of the signature may be determined according to a preset policy from the area of the virtual signature carrier where no signature is shown. That is, the target device or the positioning processing device may, according to a preset policy, assign the signature of the user of the target device a presentation position within the area of the virtual signature carrier not yet occupied by signatures. The preset policy is not limited and may be set according to actual use requirements. In other embodiments, the position selected by the user on the virtual signature carrier is obtained and used as the presentation position of the signature of the user of the target device. For example, as shown in fig. 2-(b), the user performs a preset touch operation at a certain position on the displayed virtual signature carrier; if the area corresponding to the touch shows no signature, i.e., no signature exists there, that position can be used as the presentation position of the signature. As shown in fig. 2-(d), the acquired signature of the user of the target device is then displayed at that position on the virtual signature carrier. The signature may be displayed in an area centered on the position the user selected on the virtual signature carrier; of course, it may also be displayed in an area whose top-left vertex is the selected position.
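One possible preset policy is to partition the carrier into a grid and assign the first cell that does not yet show a signature. The grid model and the `allocate_slot` name are illustrative assumptions; the disclosure deliberately leaves the policy open.

```python
def allocate_slot(occupied, rows, cols):
    """Scan a rows x cols grid over the virtual signature carrier in
    reading order and return the first cell with no signature, or
    None when the carrier is full."""
    for r in range(rows):
        for c in range(cols):
            if (r, c) not in occupied:
                return (r, c)
    return None
```

The user-selected-position variant would instead validate that the touched cell is absent from `occupied` before accepting it.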
In a specific embodiment, as shown in fig. 2-(c), the signature editing area further comprises a "return" item. When the user is not satisfied with the position selected on the virtual signature carrier, the "return" item can be triggered to go back to the interface shown in fig. 2-(b), where the user can select a position on the virtual signature carrier again to determine a new display position for the signature of the user of the target device.
In one embodiment, before the signature of the user of the target device is acquired in response to the current position of the target device being within the preset area, the current position of the target device needs to be obtained to determine whether it lies within the preset area. Specifically, the current position of the target device is acquired in response to the user's positioning trigger operation. In a specific embodiment, as shown in fig. 2-(a), after the picture data collected by the target device is acquired, it is displayed on the target device together with a "positioning" item; the user can trigger the "positioning" item to position the target device and obtain its current position. Positioning via the "positioning" item is simple and easy to operate. In other embodiments, positioning may also be triggered by voice. For example, taking the target device as the execution subject, the user utters voice information related to positioning (for example, that positioning is required now); the target device receives and parses the voice information to learn the user's intention, thereby triggering positioning to obtain the current position of the target device.
In other specific embodiments, as shown in fig. 2-(a), after the picture data collected by the target device is acquired, it is displayed on the target device together with a "signature" item; by triggering the "signature" item, the user can invoke, with one action, both positioning of the target device and the signature editing area. It should be noted that in this one-step flow, the current position of the target device is obtained first, and the signature editing area is displayed on the target device only when the current position is determined to be within the preset area, after which the user may sign. That is to say, when the current position of the target device is not within the preset area, triggering the "signature" item only positions the target device and obtains its current position; it cannot bring up the signature editing area, i.e., no signing is possible outside the preset area.
It can be understood that, in an embodiment, after the picture data collected by the target device is acquired, it is displayed on the target device together with both a "signature" item and a "positioning" item. The user can obtain the current position of the target device by triggering the "positioning" item and invoke the signature editing area by triggering the "signature" item. When the current position of the target device is determined not to be within the preset area, or when the current position cannot be obtained for other reasons, i.e., when positioning of the target device fails, the user's trigger of the "signature" item is invalid; that is, when the target device is not within the preset area or positioning fails, the user of the target device cannot sign. Displaying the relevant function-trigger items on the picture data acquired by the target device lets the user perform the related operations while capturing pictures, improving user experience.
During acquisition of the picture data of the target device, acquisition errors and similar problems may occur, causing subsequent positioning based on the picture data to fail. Therefore, in an embodiment, first guidance information is displayed on the target device, instructing the user to adjust the angle of the target device so that the picture data is acquired accurately. This prevents the user from capturing at an incorrect angle, for example aiming the target device at the ground, and makes the acquired picture data more suitable for positioning, improving the success rate of subsequent positioning. After the user adjusts the angle of the target device according to the first guidance information, the picture data of the target device is acquired in response to the user's first acquisition operation. In one embodiment, the first guidance information may be text stating how to adjust the angle of the target device. In other specific embodiments, the first guidance information may be a picture, showing the adjustment more intuitively; of course, it may also take the form of an animation, which is not limited here.
In an embodiment, after receiving the picture data sent by the target device, the positioning processing device may be configured to position the target device according to the received picture data to determine the current location of the target device. It can be understood that in other embodiments, the target device may also acquire the screen data, and the target device directly locates the target device according to the acquired screen data, that is, the target device performs local location.
In one embodiment, the current position of the target device may be determined from the picture data of the target device using target map data, improving positioning accuracy. The target map data may be high-precision map data; positioning the target device with high-precision map data and the device's picture data improves the positioning accuracy. Of course, the target map data may also be other map data, such as 2D map data, which is not limited here. It is to be understood that, in other embodiments, the target device may also be located using GPS or Bluetooth in combination with its picture data, which is not limited here and may be set according to actual use requirements.
Referring to fig. 7, fig. 7 is a flowchart illustrating an embodiment of obtaining a current location of a target device according to the present application. It should be noted that, if the result is substantially the same, the flow sequence shown in fig. 7 is not limited in this embodiment. As shown in fig. 7, in this embodiment, the implementation subject is a target device, and determining the current location of the target device by using high-precision map data specifically includes:
step S71: and positioning the target equipment based on the target map data and the currently acquired picture data of the target equipment.
In this embodiment, the target device is located using its currently acquired picture data together with the target map data. Specifically, the currently acquired picture data is matched against the target map data to position the target device.
The target map data is high-precision map data; positioning the target device with the high-precision map data and the device's picture data improves the positioning accuracy. In a specific embodiment, as shown in figs. 8 and 9, where fig. 8 is a schematic diagram of an embodiment of video material of a physical space provided by the present application and fig. 9 is a schematic diagram of an embodiment of target map data provided by the present application, a video capture tool such as a mobile phone or a panoramic camera is used to capture video material of the target physical space, i.e., the space that needs to be located. Then, 3D modeling of the entire target physical space is completed from the collected video material using an SfM (Structure from Motion) algorithm, i.e., a 3D reconstruction algorithm, or other related reconstruction algorithms, yielding a 1:1 3D model of the physical space, that is, the high-precision map data of the target physical space shown in fig. 9. When capturing the video material with a tool such as a mobile phone or a panoramic camera, every scene in the target physical space can be covered once to ensure that the collected material includes all scenes; in addition, video material can be captured repeatedly for a given scene, which improves the accuracy of the high-precision map data constructed later.
In one embodiment, the high-precision map can be deployed locally, i.e., on the target device; the execution subject is then the target device, which matches the picture data against the high-precision map data to position itself, realizing local positioning. However, local deployment requires the installation package to carry the high-precision map data, making the package large; loading it when the positioning software or applet is used slows down the target device. Therefore, in an embodiment, the high-precision map may instead be deployed on a positioning processing device; the execution subject is then the positioning processing device, which matches the picture data against the high-precision map data to position the target device.
Step S72: and responding to the success of positioning, and obtaining the current position obtained by positioning.
In this embodiment, in response to successful positioning, i.e., in response to the picture data currently acquired by the target device being successfully matched against the target map data, the current position obtained by the positioning is acquired and positioning of the target device is complete. Specifically, as shown in fig. 2-(b), once the currently acquired picture data is successfully matched against the target map data deployed on the positioning processing device or the target device, the current position of the target device is obtained, and related information such as "positioning succeeded" is displayed on the target device. In one embodiment, after the target device is successfully positioned, a success prompt is shown to inform the user.
In one embodiment, in response to a positioning failure, that is, in response to a failure in matching the acquired image data currently acquired by the target device with the target map data, a positioning failure is prompted. In one embodiment, a prompt for a failed location may be displayed on the target device. It can be understood that in other specific embodiments, the prompt information of the positioning failure can also be played in a voice broadcast manner, which is not limited herein and can be specifically set according to the actual use requirement. In one embodiment, the target device is directly relocated in response to a positioning failure. Alternatively, in other embodiments, the target device is relocated after prompting the location failure, i.e., informing the user of the location failure.
Positioning may fail because the currently acquired picture data of the target device is erroneous or insufficiently accurate. Therefore, in an embodiment, second guidance information is displayed on the target device, instructing the user to adjust the angle of the target device so that the picture data is acquired accurately, making it more suitable for positioning and improving the success rate of subsequent repositioning. After the user adjusts the angle of the target device according to the second guidance information, new picture data of the target device is acquired in response to the user's second acquisition operation. In a specific embodiment, the second guidance information may be text stating how to adjust the angle of the target device. It is to be understood that, in other specific embodiments, the second guidance information may be a picture, showing the adjustment more intuitively; of course, it may also take the form of an animation, which is not limited here.
Further, step S71 and its subsequent steps are re-executed with the new screen data, i.e., the target device is relocated with the new screen data and the target map data. Specifically, the acquired new picture data of the target device is matched with the target map data to locate the target device. In one embodiment, if the number of times of repositioning the target device exceeds the preset number of times, the target device may not be positioned because the current location of the target device is not within the preset area, that is, the target device is not within the target map data. At this time, prompt information for going to a preset area for positioning is displayed on the target device or broadcasted through voice.
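The relocate-and-retry flow of steps S71-S72, including the cap on retries and the final prompt to go to the preset area, can be sketched as follows. `capture_frame`, `localize`, and the returned messages are hypothetical placeholders standing in for the real capture and map-matching components.

```python
def locate_with_retries(capture_frame, localize, max_retries=3):
    """Match freshly captured frames against the target map data,
    retrying with new picture data on failure; after max_retries
    failures, advise the user to move into the preset area."""
    for _attempt in range(max_retries):
        pose = localize(capture_frame())  # None signals a failed match
        if pose is not None:
            return pose, "positioning succeeded"
    return None, "please go to the preset area and try again"
```

In the server-side variant, `localize` would wrap sending the frame to the positioning processing device and awaiting its result, leaving the retry logic unchanged.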
In another embodiment, when the execution subject is a positioning processing device, that is, when the positioning processing device is used to execute related positioning and repositioning operations, after the target device acquires picture data of the target device, the picture data acquired by the target device is sent to the positioning processing device, and the positioning processing device receives the picture data sent by the target device, so that the positioning processing device repositions the target device based on the target map data and the picture data currently acquired by the target device.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a signature display apparatus provided in the present application. The signature exhibition apparatus 100 includes an acquisition module 101, a detection module 102, and a first display module 103. The acquisition module 101 is configured to acquire image data acquired by a target device; the detection module 102 is configured to detect whether the picture data satisfies a signature display condition; the first display module 103 is configured to display a virtual signature carrier on the screen data displayed by the target device in response to the screen data satisfying the signature display condition.
Wherein, the signature display condition comprises that the picture data comprises a target object; the first display module 103 is configured to display a virtual signature carrier on the picture data displayed by the target device, and specifically includes: and displaying a virtual signature carrier in the screen data, wherein the display position of the virtual signature carrier in the screen data is a preset position or is determined based on the position of the target object in the screen data.
The signature display apparatus 100 further includes a determining module 104, where the determining module 104 is specifically configured to: responding to the situation that the current position of the target equipment is in the preset area, acquiring the signature of the user of the target equipment, and determining the display position of the signature of the user of the target equipment on the virtual signature carrier; wherein the virtual signature carrier presents the signature of the user of the target device at the presentation position.
The signature display apparatus 100 further includes a positioning module 105. The positioning module 105 is configured to, before the signature of the user of the target device is acquired in response to the current location of the target device being within the preset area: acquire the current location of the target device in response to a positioning trigger operation of the user. The determining module 104, when acquiring the signature of the user of the target device in response to the current location being within the preset area, is specifically configured to: display a signature editing area in response to a signature trigger operation of the user when the current location is within the preset area; and generate the signature of the user of the target device based on strokes entered by the user in an input area of the signature editing area.
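A minimal sketch of the location-gated signature flow above, assuming purely for illustration that the preset area is an axis-aligned rectangle (the disclosure does not fix its shape) and that device positions are 2-D coordinates:

```python
def in_preset_area(pos, area):
    """Return True if pos=(x, y) lies inside area=(x_min, y_min, x_max, y_max).
    The rectangular shape is an illustrative assumption."""
    x, y = pos
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def handle_signature_trigger(current_pos, preset_area):
    # The signature editing area is only shown when the device's current
    # location is within the preset area; otherwise the trigger is ignored.
    if in_preset_area(current_pos, preset_area):
        return "show_signature_editing_area"
    return "ignored"
```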
The determining module 104, when generating the signature of the user of the target device based on the strokes entered by the user in the input area of the signature editing area, is specifically configured to: display the strokes entered by the user in the input area of the signature editing area according to a first display parameter, where the first display parameter is preset or determined based on a selection of the user; adjust the strokes displayed in the input area to a second display parameter in response to an adjustment operation of the user; and obtain the signature of the user from the strokes displayed with the second display parameter.
The first display parameter includes at least one of color, display angle, and font, where the display angle is used to determine the angle at which the virtual signature carrier displays the signature; and/or the second display parameter includes at least one of color, display angle, and font; and/or the adjustment operation of the user is a selection operation of the user in a parameter configuration area of the signature editing area.
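The stroke-editing flow above (display strokes with a first display parameter, adjust them to a second, then capture the signature) can be sketched as follows; the `DisplayParams` and `SignatureEditor` names are illustrative, not from the disclosure.

```python
from dataclasses import dataclass, replace as dc_replace

@dataclass(frozen=True)
class DisplayParams:
    """Subset of display parameters named in the text: color, angle, font."""
    color: str = "black"
    angle: float = 0.0   # angle at which the carrier displays the signature
    font: str = "default"

class SignatureEditor:
    def __init__(self, first_params: DisplayParams = DisplayParams()):
        self.params = first_params  # strokes shown with the first display parameter
        self.strokes = []

    def input_stroke(self, stroke):
        # A stroke is modelled here as a list of points.
        self.strokes.append(stroke)

    def adjust(self, **changes):
        # The user's adjustment (e.g. a selection in the parameter
        # configuration area) switches the displayed strokes to the
        # second display parameter.
        self.params = dc_replace(self.params, **changes)

    def signature(self):
        # The final signature uses the strokes as displayed under the
        # (possibly adjusted) parameters.
        return {"strokes": list(self.strokes), "params": self.params}
```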
The signature display apparatus 100 further includes a second display module 106. The second display module 106 is configured to, after the picture data collected by the target device is acquired: display the picture data on the target device, and display a positioning item and/or a signature item on the picture data. The positioning trigger operation includes a trigger operation of the user on the positioning item, and/or the signature trigger operation includes at least one of the following: a trigger operation of the user on the signature item, and a preset touch operation of the user on the virtual signature carrier displayed by the target device.
The positioning module 105, when acquiring the current location of the target device, is specifically configured to: position the target device based on target map data and the picture data currently collected by the target device, or send the picture data currently collected by the target device to a positioning processing device so that the positioning processing device positions the target device based on the target map data and the picture data currently collected by the target device; and, in response to successful positioning, obtain the current location from the positioning result.
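The two positioning paths above (local positioning on the device, or delegation to a positioning processing device) can be sketched as a simple dispatch. The map is modelled as a dictionary lookup purely for illustration; real visual localization against map data is far more involved.

```python
def _match_against_map(map_data, frame):
    # Stand-in for visual localization against the target map data; a real
    # system would match image features in the frame to the map.
    return map_data.get(frame)

def locate(frame, map_data, remote=None):
    """Position the target device. If `remote` is given, it models the
    positioning processing device: the device sends its current frame and
    receives the result. Otherwise positioning runs on the device itself.
    Returns the located position on success, or None if positioning fails."""
    if remote is not None:
        return remote(map_data, frame)  # device sends picture data to the server
    return _match_against_map(map_data, frame)
```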
The signature display apparatus 100 further includes a collection module 107. The collection module 107 is configured to, before the current location of the target device is acquired: detect that the collected picture data does not meet the positioning requirement; display first guidance information, where the first guidance information instructs the user to adjust the angle of the target device; and re-acquire the picture data of the target device in response to a first collection operation of the user. Additionally or alternatively, the collection module 107 is configured to, after the target device is positioned based on the target map data and the picture data currently collected by the target device: display second guidance information, where the second guidance information instructs the user to adjust the angle of the target device; acquire new picture data of the target device in response to a second collection operation of the user; and, using the new picture data, re-perform the step of positioning the target device based on the target map data and the picture data currently collected by the target device, together with the subsequent steps.
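The guidance-and-retry behaviour described above can be sketched as a loop: each failed positioning attempt shows guidance asking the user to adjust the device angle, a new frame is collected, and positioning is re-run. All names and the attempt cap are illustrative assumptions.

```python
def locate_with_guidance(frames, map_data, max_attempts=3):
    """`frames` is an iterable of frames collected after each (re)acquisition;
    `map_data` maps a frame to a position, standing in for map-based
    positioning. Returns (position, number_of_guidance_prompts_shown)."""
    guidance_shown = 0
    for attempt, frame in enumerate(frames):
        if attempt >= max_attempts:
            break
        pos = map_data.get(frame)  # stand-in for positioning against the map
        if pos is not None:
            return pos, guidance_shown
        guidance_shown += 1  # e.g. "please adjust the device angle"
    return None, guidance_shown
```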
The determining module 104, when determining the display position of the signature of the user of the target device on the virtual signature carrier, is specifically configured to: determine the display position of the signature from a region of the virtual signature carrier that does not yet display a signature, according to a preset strategy; or obtain a position selected by the user on the virtual signature carrier as the display position of the signature.
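Either branch of the display-position logic above can be sketched as follows, assuming purely for illustration that the carrier is divided into an ordered list of candidate regions and that the preset strategy is "first free region":

```python
def choose_slot(slots, occupied, user_pick=None):
    """Pick where the user's signature appears on the virtual signature
    carrier: either a user-selected region (if still free), or — per the
    assumed first-free preset strategy — the first region not yet showing
    a signature. `slots` is an ordered list of candidate regions."""
    if user_pick is not None and user_pick not in occupied:
        return user_pick
    for slot in slots:
        if slot not in occupied:
            return slot
    return None  # carrier full: no region without a signature remains
```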
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. The electronic device 110 includes a memory 111 and a processor 112 coupled to each other, and the processor 112 is configured to execute program instructions stored in the memory 111 to implement the steps of any of the signature display method embodiments described above. In one particular implementation scenario, the electronic device 110 may include, but is not limited to, a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 112 is configured to control itself and the memory 111 to implement the steps of any of the signature display method embodiments described above. The processor 112 may also be referred to as a CPU (Central Processing Unit). The processor 112 may be an integrated circuit chip having signal processing capabilities. The processor 112 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. In addition, the processor 112 may be jointly implemented by a plurality of integrated circuit chips.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer readable storage medium 120 stores program instructions 121 capable of being executed by the processor, and the program instructions 121 are used for implementing the steps of any of the above-described embodiments of the signature display method.
The disclosure relates to the field of augmented reality. By acquiring image information of a target object in a real environment and detecting or identifying relevant features, states, and attributes of the target object by means of various vision-related algorithms, an AR effect combining the virtual and the real, matched to a specific application, can be obtained. For example, the target object may involve a face, limb, gesture, or action associated with a human body, or an identifier, marker, sand table, display area, or display item associated with an object or a venue. The vision-related algorithms may involve visual localization, SLAM, three-dimensional reconstruction, image registration, background segmentation, key-point extraction and tracking of objects, pose or depth detection of objects, and the like. Specific applications may involve not only interactive scenarios related to real scenes or objects, such as navigation, explanation, reconstruction, and superimposed display of virtual effects, but also special-effect processing related to people, such as interactive scenarios involving makeup beautification, body beautification, special-effect display, and virtual model display.
The detection or identification of the relevant features, states, and attributes of the target object can be realized through a convolutional neural network, that is, a network model obtained by model training based on a deep learning framework.
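The core operation of such convolutional networks is a learned 2-D convolution. The following minimal NumPy sketch shows a single valid-mode convolution over one channel — no training and no deep learning framework, purely to illustrate the operation that a trained detection model applies many times over.

```python
import numpy as np

def conv2d(image, kernel):
    """Minimal valid-mode 2-D convolution (cross-correlation, as used in
    deep learning). Illustrative only: a real detector stacks many such
    layers with learned kernels."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the element-wise product of the kernel
            # with the image patch under it, summed.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```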
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and for specific implementation, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
The foregoing descriptions of the various embodiments emphasize the differences between the embodiments; for the same or similar parts, reference may be made between them, and for brevity they are not described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions may be used in practice; units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection between devices or units through some interfaces, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, and is not intended to limit the scope of the present application, and all equivalent structures or equivalent processes performed by the present application and the contents of the attached drawings, which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (13)

1. A method for presenting a signature, the method comprising:
acquiring picture data acquired by target equipment;
detecting whether the picture data meets a signature display condition;
and responding to the picture data meeting the signature display condition, and displaying a virtual signature carrier on the picture data displayed by the target device.
2. The method according to claim 1, wherein the signature display condition includes that a target object is included in the picture data; the displaying a virtual signature carrier on the picture data displayed by the target device includes:
and displaying the virtual signature carrier in the picture data, wherein the display position of the virtual signature carrier in the picture data is a preset position or is determined based on the position of the target object in the picture data.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
responding to the current position of the target equipment in a preset area, acquiring the signature of the user of the target equipment, and determining the display position of the signature of the user of the target equipment on the virtual signature carrier;
wherein the virtual signature carrier presents a signature of a user of the target device at the presentation location.
4. The method of claim 3, wherein prior to said obtaining a signature of a user of the target device in response to the current location of the target device being within a preset area, the method further comprises:
responding to the positioning trigger operation of a user, and acquiring the current position of the target equipment;
and/or, the obtaining of the signature of the user of the target device in response to the current location of the target device being within a preset area includes:
responding to signature triggering operation of a user and displaying a signature editing area when the current position is in the preset area;
generating a signature of a user of the target device based on strokes entered by the user in an input area of the signature editing region.
5. The method of claim 4, wherein generating the signature of the user of the target device based on the strokes entered by the user in the input area of the signature editing region comprises:
displaying strokes input by a user in an input area of the signature editing area according to a first display parameter, wherein the first display parameter is preset or determined based on the selection of the user;
adjusting the strokes displayed in the input area to a second display parameter in response to an adjustment operation of a user;
and obtaining the signature of the user by using the strokes displayed by the second display parameter.
6. The method of claim 5,
the first display parameter comprises at least one of color, display angle and font, wherein the display angle is used for determining the angle of the virtual signature carrier for displaying the signature;
and/or the second display parameter comprises at least one of color, display angle and font;
and/or the user's adjustment operation is the user's selection operation in the parameter configuration area of the signature editing area.
7. The method according to any one of claims 4 to 6, wherein after the acquiring the picture data acquired by the target device, the method further comprises:
displaying the picture data on the target equipment, and displaying a positioning item and/or a signature item on the picture data;
wherein the positioning trigger operation comprises a trigger operation of a user on the positioning item, and/or the signature trigger operation comprises at least one of the following operations: a trigger operation of the user on the signature item, and a preset touch operation of the user on a virtual signature carrier displayed by the target device.
8. The method according to any one of claims 4 to 7, wherein the obtaining the current location of the target device comprises:
positioning the target equipment based on target map data and the picture data currently acquired by the target equipment, or sending the picture data currently acquired by the target equipment to positioning processing equipment so that the positioning processing equipment positions the target equipment based on the target map data and the picture data currently acquired by the target equipment;
and responding to the success of the positioning, and obtaining the current position obtained by the positioning.
9. The method of claim 8, wherein prior to said obtaining the current location of the target device, the method further comprises:
detecting that the collected picture data does not meet the positioning requirement;
displaying first guide information, wherein the first guide information is used for instructing a user to adjust the angle of the target device; and re-acquiring the picture data of the target device in response to a first acquisition operation of the user;
and/or after the target device is positioned based on the target map data and the picture data currently acquired by the target device, the method further comprises:
displaying second guidance information, wherein the second guidance information is used for instructing a user to adjust the angle of the target device; responding to a second acquisition operation of the user, and acquiring new picture data of the target device; and, using the new picture data, re-executing the step of positioning the target device based on the target map data and the picture data currently acquired by the target device, and the subsequent steps thereof.
10. The method according to any one of claims 3 to 8, wherein the determining of the position of the presentation of the signature of the user of the target device on the virtual signature carrier comprises:
according to a preset strategy, determining the display position of the signature from the region which does not display the signature on the virtual signature carrier; or,
and acquiring the position selected by the user on the virtual signature carrier as the display position of the signature.
11. A signature presentation device, the device comprising:
the acquisition module is used for acquiring the picture data acquired by the target equipment;
the detection module is used for detecting whether the picture data meets a signature display condition;
and the display module is used for responding to the picture data meeting the signature display condition and displaying a virtual signature carrier on the picture data displayed by the target equipment.
12. An electronic device, comprising a memory storing program instructions and a processor for executing the program instructions to implement the signature presentation method as claimed in any one of claims 1-10.
13. A computer-readable storage medium for storing program instructions executable to implement the signature presentation method as claimed in any one of claims 1 to 10.
CN202210488405.9A 2022-05-06 2022-05-06 Signature display method and device and related equipment and storage medium thereof Pending CN114911382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210488405.9A CN114911382A (en) 2022-05-06 2022-05-06 Signature display method and device and related equipment and storage medium thereof


Publications (1)

Publication Number Publication Date
CN114911382A true CN114911382A (en) 2022-08-16

Family

ID=82766296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210488405.9A Pending CN114911382A (en) 2022-05-06 2022-05-06 Signature display method and device and related equipment and storage medium thereof

Country Status (1)

Country Link
CN (1) CN114911382A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination