CN111932455B - Information sharing method and related product


Info

Publication number
CN111932455B
Authority
CN
China
Prior art keywords
picture
information
sharing
key information
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010753185.9A
Other languages
Chinese (zh)
Other versions
CN111932455A (en)
Inventor
李思龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Futu Network Technology Co Ltd
Original Assignee
Shenzhen Futu Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Futu Network Technology Co Ltd filed Critical Shenzhen Futu Network Technology Co Ltd
Priority to CN202010753185.9A
Publication of CN111932455A
Application granted
Publication of CN111932455B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the application provides an information sharing method and related products, wherein the method comprises the following steps: after receiving a sharing indication from a target object, user equipment (UE) acquires all picture information of the sharing indication; the UE identifies all the picture information to determine key information areas therein, wherein a key information area is a partial area of all the picture information; the UE intercepts the key information areas to obtain key information pictures; and the UE splices the key information pictures into a sharing picture and shares the sharing picture with a sharing object. The technical scheme provided by the application has the advantage of improving the experience of financial information sharing.

Description

Information sharing method and related product
Technical Field
The application relates to the technical field of electronics and information, in particular to an information sharing method and related products.
Background
Information sharing refers to the exchange and sharing of information and information products among information systems at different levels and in different departments, so that resources can be allocated more reasonably, social cost can be saved and more wealth can be created; in the Internet age its importance is increasingly evident. It is an important means of improving the utilization of information resources and avoiding duplicated effort in information acquisition, storage and management. Compared with ordinary information sharing, financial information sharing places higher demands on the criticality and timeliness of the shared information; existing financial information sharing, however, carries a large amount of invalid (junk) information, so it cannot meet the sharing requirements of financial information and degrades the user's experience of financial information sharing.
Disclosure of Invention
The embodiments of the application disclose an information sharing method and related products, which can meet the sharing requirements of financial information and improve the user's experience of financial information sharing.
In a first aspect, an information sharing method is provided, where the method is applied to a user equipment UE, and the method includes the following steps:
after receiving the sharing indication of the target object, the UE acquires all picture information of the sharing indication;
the UE identifies all the picture information to determine the key information areas in all the picture information, wherein a key information area is a partial area of all the picture information, and intercepts the key information areas to obtain the key information pictures;
the UE splices the key information pictures to obtain a sharing picture, and shares the sharing picture with a sharing object.
In a second aspect, there is provided a user equipment, the user equipment comprising:
the receiving and transmitting unit is used for receiving the sharing instruction of the target object;
the acquisition unit is used for acquiring all the picture information of the sharing indication;
The processing unit is used for identifying and determining the key information areas in all the picture information, wherein a key information area is a partial area of all the picture information; intercepting the key information areas to obtain the key information pictures; and splicing the key information pictures to obtain a sharing picture and sharing the sharing picture with a sharing object.
In a third aspect, there is provided a terminal comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of the first aspect.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method according to the first aspect.
A fifth aspect of the embodiments of the present application discloses a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to perform part or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiments of the application, after receiving a sharing indication from a target object, the UE acquires all picture information of the sharing indication; the UE identifies all the picture information to determine key information areas, each of which is a partial area of all the picture information; the UE intercepts the key information areas to obtain key information pictures; and the UE splices the key information pictures into a sharing picture and shares it with the sharing object. In this technical scheme the sharing picture contains only the key information pictures and redundant information has been removed, so sharing of invalid information is avoided and the experience of financial information sharing is improved.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
Fig. 2 is a flow chart of an information sharing method according to an embodiment of the present application;
Fig. 2a is a schematic diagram of original picture information according to an embodiment of the present application;
Fig. 3 is a flow chart of an information sharing method according to Embodiment 1 of the present application;
Fig. 3a is a schematic diagram of a header picture according to Embodiment 1 of the present application;
Fig. 3b is a schematic diagram of a key information picture according to Embodiment 1 of the present application;
Fig. 3c is a schematic diagram of another key information picture according to Embodiment 1 of the present application;
Fig. 3d is a schematic diagram of another key information picture according to Embodiment 1 of the present application;
Fig. 3e is a schematic diagram of a sharing picture according to Embodiment 1 of the present application;
Fig. 3f is a schematic diagram of another sharing picture according to Embodiment 1 of the present application;
Fig. 3g is a schematic diagram of another sharing picture according to Embodiment 1 of the present application;
Fig. 4 is a schematic structural diagram of a user equipment according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The term "and/or" in the present application is merely an association relation describing the association object, and indicates that three kinds of relations may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In this context, the character "/" indicates that the front and rear associated objects are an "or" relationship.
The term "plurality" as used in the embodiments of the present application means two or more. The first, second, etc. descriptions in the embodiments of the present application are only used for illustrating and distinguishing the description objects, and no order is used, nor is the number of the devices in the embodiments of the present application limited, and no limitation on the embodiments of the present application should be construed. The "connection" in the embodiment of the present application refers to various connection manners such as direct connection or indirect connection, so as to implement communication between devices, which is not limited in the embodiment of the present application.
The terminal in the embodiments of the present application may refer to various forms of UE, access terminal, subscriber unit, subscriber station, mobile station (MS), remote station, remote terminal, mobile device, user terminal, terminal equipment, wireless communication device, user agent or user apparatus. The terminal device may also be a cellular phone, a cordless phone, a SIP (Session Initiation Protocol) phone, a WLL (Wireless Local Loop) station, a PDA (Personal Digital Assistant), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a future 5G network, a terminal device in a future evolved PLMN (Public Land Mobile Network), etc., which the embodiments of the present application do not limit.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal disclosed in an embodiment of the present application. The terminal 100 may be a user equipment (UE). The terminal 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110; the sensor 170 may include a camera, a distance sensor, a gravity sensor and the like. The electronic device may include two transparent display screens disposed on the back and the front of the electronic device, and some or all of the components between the two transparent display screens may be transparent, so that visually the electronic device is a transparent electronic device; if only some of the components are transparent, the electronic device is a hollowed-out electronic device. Wherein:
Terminal 100 may include control circuitry, which may include the storage and processing circuit 110. The storage and processing circuit 110 may be a memory, such as hard-drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid-state drive), volatile memory (e.g., static or dynamic random access memory), etc., which the embodiments of the application do not limit. The processing circuitry in the storage and processing circuit 110 may be used to control the operation of the terminal 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the terminal 100, such as internet browsing applications, voice over internet protocol (Voice over Internet Protocol, VOIP) telephone call applications, email applications, media playing applications, operating system functions, and the like. Such software may be used to perform some control operations, such as image acquisition based on a camera, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functions implemented based on status indicators such as status indicators of light emitting diodes, touch event detection based on a touch sensor, functions associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in terminal 100, to name a few.
Terminal 100 may include input-output circuitry 150. The input-output circuit 150 enables the terminal 100 to input and output data, i.e., to receive data from external devices and to output data from the terminal 100 to external devices. The input-output circuit 150 may further include the sensor 170. The sensor 170 may include an ambient light sensor, an optical or capacitive proximity sensor, a fingerprint recognition module, a touch sensor (for example an optical and/or capacitive touch sensor, which may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, a camera and other sensors. The camera may be a front camera or a rear camera, and the fingerprint recognition module may be integrated below the display screen to collect fingerprint images; the fingerprint recognition module may be, for example, an optical fingerprint module, which is not limited herein. The front camera may be disposed below the front display screen and the rear camera below the rear display screen. Of course, the front or rear camera need not be integrated with the display screen; in practical applications the front or rear camera may also be of a lifting (pop-up) structure, and the embodiments of the present application do not limit the specific structure of the cameras.
The input-output circuit 150 may also include one or more displays, such as display 130. In the case of multiple displays, e.g. two displays, one may be disposed on the front of the electronic device and another on the back. The display 130 may include one or a combination of a liquid crystal display, a transparent display, an organic light emitting diode display, an electronic ink display, a plasma display, or a display using other display technologies. The display 130 may include an array of touch sensors (i.e., the display 130 may be a touch-sensitive display). The touch sensor may be a capacitive touch sensor formed of an array of transparent touch sensor electrodes, such as indium tin oxide (ITO) electrodes, or a touch sensor using other touch technologies, such as acoustic wave touch, pressure-sensitive touch, resistive touch or optical touch, which the embodiments of the application do not limit.
The terminal 100 may also include an audio component 140. Audio component 140 may be used to provide audio input and output functionality for terminal 100. The audio components 140 in the terminal 100 may include a speaker, microphone, buzzer, tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the terminal 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuitry in the communication circuit 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low-noise amplifiers, switches, filters and antennas. For example, the wireless communication circuitry in the communication circuit 120 may include circuitry to support near field communication (NFC) by transmitting and receiving near-field-coupled electromagnetic signals, such as a near field communication antenna and a near field communication transceiver. The communication circuit 120 may also include a cellular telephone transceiver and antenna, wireless local area network transceiver circuitry and antenna, and the like.
The terminal 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, levers, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes, and other status indicators, etc.
The user may control the operation of the terminal 100 by inputting commands through the input-output circuit 150, and may use output data of the input-output circuit 150 to enable receiving status information and other outputs from the terminal 100.
Referring to fig. 2, fig. 2 provides an information sharing method performed by a UE (which may be the terminal shown in fig. 1 or another type of device, such as a tablet computer or a personal computer). The method shown in fig. 2 includes the following steps:
Step S201, after receiving a sharing instruction of a target object, the UE acquires all picture information of the sharing instruction.
The implementation of step S201 may specifically include:
The UE detects that the target object (the user) clicks the sharing button of a view and invokes the one-touch sharing function (i.e., receives the sharing indication of the target object). At this time it obtains all the picture information, such as the shared title and the stock class instance (Stock).
Step S202, the UE identifies all the picture information to determine the key information areas in all the picture information, wherein a key information area is a partial area of all the picture information, and intercepts the key information areas to obtain the key information pictures.
The key information pictures in step S202 may be the picture information that remains after redundant information has been removed from all the picture information.
The UE identifying all the picture information to determine the key information area in all the picture information may specifically include:
The UE acquires the scene matched with the sharing indication and extracts the key information set corresponding to that scene; it identifies all the picture information to determine a plurality of pieces of picture information and the plurality of picture areas corresponding to them; it compares the plurality of pieces of picture information with the key information set to determine the n pieces of picture information matching the key information set and their n picture areas; and it determines the n picture areas as the key information areas, where n is an integer greater than or equal to 1.
The identification of all the pictures above may be AI-based, but other identification manners are also possible, for example a classifier.
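As an illustration of the matching step above, the following is a minimal Swift sketch. It assumes the identification has already produced a text-plus-bounding-box pair for each candidate picture area; the RecognizedRegion type, its fields and the function name are hypothetical, not taken from the application.

    import Foundation
    import CoreGraphics

    // Hypothetical output of recognizing one picture area: the text found in
    // it and the corresponding picture region (names are illustrative only).
    struct RecognizedRegion {
        let text: String
        let rect: CGRect
    }

    // Keep the picture areas whose recognized text matches any keyword in the
    // key information set extracted for the current sharing scene.
    func keyInformationRegions(in regions: [RecognizedRegion],
                               keyInformationSet: Set<String>) -> [CGRect] {
        regions
            .filter { region in
                keyInformationSet.contains { keyword in
                    region.text.localizedCaseInsensitiveContains(keyword)
                }
            }
            .map { $0.rect }
    }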
For example, in an alternative solution, the UE identifying all the picture information to determine the key information area in all the picture information may specifically include:
The UE identifies the target object, determines the first identity of the target object, and extracts first weight data corresponding to the first identity;
the UE divides all the picture information into m picture areas according to a preset rule, identifies the m picture areas respectively to determine m keyword sets for the m picture areas, and obtains m pieces of input data from the input values of the keywords and the m keyword sets;
the UE calculates the m pieces of input data with the first weight data respectively to obtain m calculation results, and determines as key information areas the w picture areas whose calculation results are larger than a result threshold.
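A minimal sketch of this scoring variant follows, under stated assumptions: the application only says the m pieces of input data are "calculated" with the weight data against a result threshold, so the dot-product score and the vector shapes below are illustrative choices rather than the application's definition.

    // Score each picture area's input vector against the identity's weight
    // data; the indices of the areas whose score exceeds the result threshold
    // are returned as the key information areas.
    func keyAreaIndices(inputData: [[Double]],   // m input vectors, one per area
                        weights: [Double],       // first (or second) weight data
                        resultThreshold: Double) -> [Int] {
        inputData.enumerated().compactMap { (index, input) -> Int? in
            // Assumed scoring rule: dot product of input and weight vectors.
            let score = zip(input, weights).reduce(0.0) { $0 + $1.0 * $1.1 }
            return score > resultThreshold ? index : nil
        }
    }

The same sketch covers the second alternative below, with the second weight data substituted for the first.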
As another example, in another alternative, the UE identifying all the picture information to determine the key information area in all the picture information may specifically include:
The UE identifies the target object, determines the first identity of the target object, predicts the sharing object according to the first identity and the scene corresponding to the sharing indication, and acquires second weight data for the first identity and the sharing object;
the UE divides all the picture information into m picture areas according to a preset rule, identifies the m picture areas respectively to determine m keyword sets for the m picture areas, and obtains m pieces of input data from the input values of the keywords and the m keyword sets;
the UE calculates the m pieces of input data with the second weight data respectively to obtain m calculation results, and determines as key information areas the w picture areas whose calculation results are larger than a result threshold.
Step S203, the UE splices the key information pictures to obtain a sharing picture, and shares the sharing picture with the sharing object.
The sharing picture can be shared with the sharing object in various manners, for example by mail, by instant messaging software (QQ, WeChat or Facebook), through a financial application program, and so on.
The UE splicing the key information pictures to obtain the sharing picture may specifically include:
The UE splices the key information pictures to obtain a spliced picture, identifies the key information pictures to obtain their parameters, processes the parameters according to preset rules to obtain a processing result, and adds the processing result to the spliced picture to obtain the sharing picture.
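The splicing itself can be as simple as drawing the key information pictures one below another on a single canvas; the following Swift sketch shows that step alone, leaving out the processing result that the preceding paragraph describes appending.

    import UIKit

    // Vertically stitch the key information pictures into one spliced picture.
    func stitchVertically(_ images: [UIImage]) -> UIImage? {
        let width = images.map { $0.size.width }.max() ?? 0
        let height = images.reduce(0) { $0 + $1.size.height }
        guard width > 0, height > 0 else { return nil }

        UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height),
                                               false, 0)
        defer { UIGraphicsEndImageContext() }

        var y: CGFloat = 0
        for image in images {
            image.draw(in: CGRect(x: 0, y: y,
                                  width: image.size.width,
                                  height: image.size.height))
            y += image.size.height
        }
        return UIGraphicsGetImageFromCurrentImageContext()
    }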
In an optional solution, processing the parameters according to a preset rule to obtain a processing result specifically includes:
And calculating the parameters according to a preset formula or a formula selected by the target object to obtain a processing result.
In an optional solution, processing the parameters according to a preset rule to obtain a processing result specifically includes:
And calculating the parameters according to a preset formula or a formula selected by a target object to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
The above charts include, but are not limited to: asset distribution charts, comprehensive revenue charts, profitability charts, asset trend charts and the like, and these charts may be proportion tables, circular (pie) distribution tables, matrix tables and the like.
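As a sketch of this statistics step, the following hypothetical asset-distribution computation turns parameters into the ratio entries that a proportion table or circular (pie) distribution table could plot. The ChartEntry type and the share-of-total formula are assumptions standing in for the preset formula or the formula selected by the target object.

    // Illustrative chart datum: a labelled share of the total.
    struct ChartEntry {
        let label: String
        let ratio: Double
    }

    // Assumed formula: each asset's share of the portfolio total, sorted
    // from largest to smallest for display.
    func assetDistribution(_ assets: [String: Double]) -> [ChartEntry] {
        let total = assets.values.reduce(0, +)
        guard total > 0 else { return [] }
        return assets
            .map { ChartEntry(label: $0.key, ratio: $0.value / total) }
            .sorted { $0.ratio > $1.ratio }
    }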
After receiving the sharing indication of the target object, the UE acquires all picture information of the sharing indication; the UE identifies all the picture information to determine key information areas, each of which is a partial area of all the picture information; the UE intercepts the key information areas to obtain key information pictures; and the UE splices the key information pictures into a sharing picture and shares it with the sharing object. In this technical scheme the sharing picture contains only the key information pictures and redundant information has been removed, so sharing of invalid information is avoided and the experience of financial information sharing is improved.
The UE identifying the target object to determine the first identity of the target object may specifically include:
E1, acquiring a target face image of the target object;
E2, verifying the target face image;
and E3, when the target face image passes verification, determining that the target object has the first identity corresponding to a preset face template.
In a specific implementation, a preset face template can be stored in the electronic equipment in advance, and an original image of the target object can be obtained through the camera. The electronic equipment determines the first identity of the target object when the target face image is successfully matched with the preset face template, and otherwise does not determine the first identity. In this way the identity of the target object can be recognized, and it can further be judged whether the first identity belongs to, for example, a reserved patient, preventing remote medical treatment from being started by someone else.
Further, in one possible example, the step E2 of verifying the target face image may include the following steps:
E21, carrying out region segmentation on the target face image to obtain a target face region, wherein the target face region is a region image containing only the face;
E22, carrying out binarization processing on the target face region to obtain a binarized face image;
E23, dividing the binarized face image into a plurality of regions, wherein each region has the same area and that area is larger than a preset area value;
E24, extracting feature points of the binarized face image to obtain a plurality of feature points;
E25, determining the feature point distribution density corresponding to each of the plurality of regions according to the plurality of feature points, to obtain a plurality of feature point distribution densities;
E26, determining a target mean square error according to the distribution density of the plurality of characteristic points;
E27, determining the target quality evaluation value corresponding to the target mean square error according to a preset mapping relation between mean square error and quality evaluation value;
E28, when the target quality evaluation value is smaller than the preset quality evaluation value, performing image enhancement processing on the target face image, and matching the enhanced target face image with the preset face template to obtain a matching value;
E29, when the matching value is larger than a preset threshold, determining that the target face image passes verification.
In a specific implementation, the preset threshold value and the preset area value can be set by the user or default to system values, and the preset face template can be stored in the electronic device in advance. The electronic device may perform region segmentation on the target face image to obtain the target face region, where the target face region may be a region that contains only the face and no background, i.e., a region image of only the face. Furthermore, the target face region can be binarized to obtain a binarized face image, which reduces the complexity of the image; the binarized face image is divided into a plurality of regions, each region having the same area, which is larger than or equal to the preset area value. Further, feature point extraction may be performed on the binarized face image to obtain a plurality of feature points, and the feature extraction algorithm may be at least one of the following: the scale-invariant feature transform (SIFT) algorithm, the SURF algorithm, a pyramid algorithm, Harris corner detection, etc., which are not limited herein.
Further, the electronic device may determine, from the plurality of feature points, the feature point distribution density corresponding to each of the plurality of regions, obtaining a plurality of feature point distribution densities, and may determine the target mean square error from these distribution densities. A mapping relation between mean square error and quality evaluation value is stored in the electronic device in advance, and the target quality evaluation value corresponding to the target mean square error is determined from this mapping; naturally, the smaller the mean square error, the larger the quality evaluation value. When the target quality evaluation value is larger than the preset quality evaluation value, the target face image is matched directly with the preset face template: if the matching value is larger than the preset threshold, the target face image is confirmed to pass verification, and otherwise it is confirmed to fail verification.
Further, when the target quality evaluation value is smaller than the preset quality evaluation value, the terminal can perform image enhancement processing on the target face image and match the enhanced target face image with the preset face template; when the matching value between the two is larger than the preset threshold, the target face image is confirmed to pass verification, and otherwise it is confirmed to fail verification.
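The quality-evaluation core of steps E23 to E27 can be sketched as follows. The per-region feature point density and the mean square error over those densities follow the steps above; the final mapping from mean square error to quality evaluation value is an illustrative stand-in, since the application keeps it as a preset mapping relation.

    import CoreGraphics

    // Count feature points per equal-area region, take the mean square error
    // of the densities, and map it to a quality score (a smaller spread of
    // densities gives a higher score).
    func qualityEvaluation(featurePoints: [CGPoint],
                           regions: [CGRect]) -> Double {
        guard !regions.isEmpty else { return 0 }
        let densities = regions.map { region -> Double in
            let count = featurePoints.filter { region.contains($0) }.count
            return Double(count) / Double(region.width * region.height)
        }
        let mean = densities.reduce(0, +) / Double(densities.count)
        let sumSq = densities.reduce(0) { $0 + ($1 - mean) * ($1 - mean) }
        let mse = sumSq / Double(densities.count)
        // Illustrative stand-in for the preset mean-square-error-to-quality mapping.
        return 1.0 / (1.0 + mse)
    }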
Example 1
The application provides an information sharing method executed by a smartphone; as shown in fig. 3, the method comprises the following steps:
Step 301, after receiving a sharing instruction of a target object, the UE acquires all picture information of the sharing instruction;
Step S302, the UE draws a header picture from one picture among all the picture information;
The implementation of step S302 may specifically include: obtaining the default header background image and scaling it proportionally so that its width equals the width of the phone screen, then creating a canvas area (UIGraphicsBeginImageContextWithOptions) whose width and height match the size of the scaled background image.
The background image is drawn into the canvas area through the draw(in:) interface of UIGraphics. When the app starts, the two-dimensional code (QR code) picture and the slogan text used for picture sharing are pulled from the back end and stored in a cache; if the two-dimensional code has not been pulled, default picture and text information are used. The locally cached brand picture, two-dimensional code picture and slogan text are drawn with draw(in:) as layer 1 into the canvas area. Layer 2 is drawn into the canvas area with draw(in:) according to the title text passed in at sharing time. A quotation bar view (UIView) is created and initialized with the stock information passed in by the share; during initialization the view fetches the stock quotation information from a server. The view's layer (CALayer) content is rendered with render(in:) as layer 3 into the canvas area. Layers 1, 2 and 3 may be drawn differently depending on the sharing entry. Drawing then ends, and UIGraphicsEndImageContext generates the header picture from the canvas area. The drawn header picture can be as shown in fig. 3a; one of the pictures of the picture information can be as shown in fig. 2a, and fig. 3a removes the redundant information present in fig. 2a.
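A condensed Swift sketch of this header drawing follows, using the UIKit calls named above (UIGraphicsBeginImageContextWithOptions, draw(in:), CALayer's render(in:), UIGraphicsEndImageContext). Collapsing the brand, QR code, slogan and title drawing into a single background layer is a simplification, and the function signature is hypothetical.

    import UIKit

    // Draw the scaled background (standing in for layers 1 and 2) and then
    // render the stock quotation view's layer (layer 3) below it.
    func drawHeaderPicture(background: UIImage,
                           screenWidth: CGFloat,
                           quoteView: UIView) -> UIImage? {
        // Scale the background proportionally so its width equals the screen width.
        let scale = screenWidth / background.size.width
        let bgSize = CGSize(width: screenWidth,
                            height: background.size.height * scale)
        let canvasSize = CGSize(width: screenWidth,
                                height: bgSize.height + quoteView.bounds.height)

        UIGraphicsBeginImageContextWithOptions(canvasSize, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        background.draw(in: CGRect(origin: .zero, size: bgSize))

        // Render the quotation bar view's layer content under the background.
        context.translateBy(x: 0, y: bgSize.height)
        quoteView.layer.render(in: context)

        return UIGraphicsGetImageFromCurrentImageContext()
    }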
Step S303, the UE identifies all the pictures according to all the picture information to determine the key information areas in all the picture information, and draws the key information pictures;
The step of drawing a key information picture may specifically include: finding, according to the clicked sharing button, the view (UIView) to be shared; adjusting the view content using the view layout method and hiding content irrelevant to the share; creating a canvas area (UIGraphicsBeginImageContextWithOptions) of the size the view requires, where the height of the invisible part of the view is also calculated; rendering the current layer (CALayer) content into the canvas area with render(in:); where the view content exceeds the visible area, scrolling the view screen by screen by adjusting the starting point position (setContentOffset) and rendering it multiple times; and finishing drawing (UIGraphicsEndImageContext) to generate the key information picture. The key information picture is shown in fig. 3b.
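The screen-by-screen capture can be sketched in Swift as follows. The description does not spell out how the several render(in:) passes are composed, so the paging loop and the context translation below are assumptions.

    import UIKit

    // Capture the full content of a scroll view, including the part outside
    // the visible area, by paging with setContentOffset and rendering the
    // layer once per screen into a canvas the size of the whole content.
    func captureFullContent(of scrollView: UIScrollView) -> UIImage? {
        let contentSize = scrollView.contentSize
        let pageHeight = scrollView.bounds.height
        guard pageHeight > 0, contentSize.height > 0 else { return nil }
        let savedOffset = scrollView.contentOffset

        UIGraphicsBeginImageContextWithOptions(contentSize, false, 0)
        defer {
            UIGraphicsEndImageContext()
            scrollView.setContentOffset(savedOffset, animated: false)
        }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        var offsetY: CGFloat = 0
        while offsetY < contentSize.height {
            scrollView.setContentOffset(CGPoint(x: 0, y: offsetY), animated: false)
            scrollView.layoutIfNeeded()

            // Place this screen's rendering at its position in the tall canvas.
            context.saveGState()
            context.translateBy(x: 0, y: offsetY)
            scrollView.layer.render(in: context)
            context.restoreGState()

            offsetY += pageHeight
        }
        return UIGraphicsGetImageFromCurrentImageContext()
    }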
If all the picture information includes an asset analysis sharing picture, the step of drawing the key information picture may specifically include: the user enters the asset analysis page, and asset and income statistics are pulled from the background service according to the default filter or the time range and yield formula actively selected by the user; when the user clicks the sharing button, the front-end layout method is called with the data and user information pulled above to generate a comprehensive information view of yield, comprehensive income and asset trend; a canvas area (UIGraphicsBeginImageContextWithOptions) of the size required by the view is created; the current layer (CALayer) content is rendered into the canvas area with render(in:); where the view content exceeds the visible area, the view is scrolled screen by screen by adjusting the starting point position (setContentOffset) and rendered multiple times; and drawing ends (UIGraphicsEndImageContext), yielding the key information picture with the asset analysis. The asset analysis key information picture is shown in fig. 3c (a numeric display) or fig. 3d (a graphic display).
In step S304, the UE splices the header picture and the key information pictures into the final sharing picture, and shares the sharing picture with the sharing object.
The sharing picture can be as shown in fig. 3e, fig. 3f or fig. 3g.
After receiving the sharing indication of the target object, the UE acquires all picture information of the sharing indication; the UE identifies all the picture information to determine key information areas, each of which is a partial area of all the picture information; the UE intercepts the key information areas to obtain key information pictures; and the UE splices the key information pictures into a sharing picture and shares it with the sharing object. In this technical scheme the sharing picture contains only the key information pictures and redundant information has been removed, so sharing of invalid information is avoided and the experience of financial information sharing is improved.
Referring to fig. 4, fig. 4 provides a user equipment, including:
A transceiver unit 401, configured to receive a sharing instruction of a target object;
an obtaining unit 402, configured to obtain all picture information of the sharing instruction;
A processing unit 403, configured to identify and determine the key information areas in all the picture information, where a key information area is a partial area of all the picture information; intercept the key information areas to obtain the key information pictures; and splice the key information pictures to obtain a sharing picture and share the sharing picture with a sharing object.
For the specific processing manner of the processing unit in the user equipment shown in fig. 4, reference may be made to the description of the embodiment shown in fig. 2, which is not repeated here.
Referring to fig. 5, fig. 5 is a device 50 according to an embodiment of the present application, where the device 50 includes a processor 501, a memory 502, and a communication interface 503, and the processor 501, the memory 502, and the communication interface 503 are connected to each other by a bus.
Memory 502 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or portable read-only memory (CD-ROM); the memory 502 is used for storing the related computer programs and data. The communication interface 503 is used to receive and transmit data.
The processor 501 may be one or more central processing units (central processing unit, CPU), and in the case where the processor 501 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 501 in the device 50 is configured to read the computer program code stored in the memory 502 and perform the following operations:
After receiving a sharing instruction of a target object, acquiring all picture information of the sharing instruction;
identifying all the picture information to determine the key information areas, wherein a key information area is a partial area of all the picture information, and intercepting the key information areas to obtain the key information pictures;
splicing the key information pictures to obtain a sharing picture, and sharing the sharing picture with a sharing object.
The step in which the UE splices the key information pictures to obtain the sharing picture specifically comprises:
the UE adds a preset header background at the header position of the key information pictures and splices them to obtain the sharing picture.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
acquiring the scene matched with the sharing indication and extracting the key information set corresponding to that scene; identifying all the picture information to determine a plurality of pieces of picture information and the plurality of picture areas corresponding to them; comparing the plurality of pieces of picture information with the key information set to determine the n pieces of picture information matching the key information set and their n picture areas; and determining the n picture areas as the key information areas, where n is an integer greater than or equal to 1.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
identifying the target object, determining the first identity of the target object, and extracting first weight data corresponding to the first identity;
dividing all the picture information into m picture areas according to a preset rule, identifying the m picture areas respectively to determine m keyword sets for the m picture areas, and obtaining m pieces of input data from the input values of the keywords and the m keyword sets;
calculating the m pieces of input data with the first weight data respectively to obtain m calculation results, and determining as key information areas the w picture areas whose calculation results are larger than a result threshold.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
identifying the target object, determining the first identity of the target object, predicting the sharing object according to the first identity and the scene corresponding to the sharing indication, and acquiring second weight data for the first identity and the sharing object;
dividing all the picture information into m picture areas according to a preset rule, identifying the m picture areas respectively to determine m keyword sets for the m picture areas, and obtaining m pieces of input data from the input values of the keywords and the m keyword sets;
calculating the m pieces of input data with the second weight data respectively to obtain m calculation results, and determining as key information areas the w picture areas whose calculation results are larger than a result threshold.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
splicing the key information pictures to obtain a spliced picture, identifying the key information pictures to obtain their parameters, processing the parameters according to preset rules to obtain a processing result, and adding the processing result to the spliced picture to obtain the sharing picture.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
And calculating the parameters according to a preset formula or a formula selected by the target object to obtain a processing result.
In an alternative, the computer program code stored in the memory 502 may also perform the following operations:
And calculating the parameters according to a preset formula or a formula selected by a target object to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, which when run on a network device, implements the method flow shown in fig. 2.
Embodiments of the present application also provide a computer program product, which when run on a terminal, implements the method flow shown in fig. 2.
The embodiment of the application also provides a terminal comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of the embodiment shown in fig. 2.
The foregoing description of the embodiments of the present application has been presented primarily from the perspective of the method. It will be appreciated that, in order to achieve the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is implemented as hardware or as computer-software-driven hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementations should not be interpreted as departing from the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also understand that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily essential to the application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiments of the present application have been described in detail above. The principles and implementations of the present application are explained herein using specific examples, and the above description of the embodiments is provided only to help understand the method and core ideas of the present application. At the same time, those skilled in the art may, following the ideas of the present application, make changes to the specific implementations and the application scope; in summary, the content of this specification should not be construed as limiting the present application.

Claims (7)

1. An information sharing method, characterized in that the method is applied to a user equipment (UE) and comprises the following steps:
after receiving the sharing indication of the target object, the UE acquires all picture information of the sharing indication;
the UE identifies all the picture information to determine the key information areas in all the picture information, wherein a key information area is a partial area of all the picture information, and intercepts the key information areas to obtain the key information pictures;
the UE splices the key information pictures to obtain a sharing picture, and shares the sharing picture with a sharing object;
the UE identifying all the picture information to determine a key information area in all the picture information specifically includes:
the UE acquires the scene matched with the sharing indication and extracts the key information set corresponding to that scene; it identifies all the picture information to determine a plurality of pieces of picture information and the plurality of picture areas corresponding to them; it compares the plurality of pieces of picture information with the key information set to determine the n pieces of picture information matching the key information set and their n picture areas; and it determines the n picture areas as the key information areas, wherein n is an integer greater than or equal to 1.
2. The method of claim 1, wherein the UE splicing the key information pictures to obtain the sharing picture specifically comprises:
the UE adds a preset header background at the header position of the key information pictures and splices them to obtain the sharing picture.
3. The method of claim 1, wherein the UE splicing the key information pictures to obtain the sharing picture specifically comprises:
the UE splices the key information pictures to obtain a spliced picture, identifies the key information pictures to obtain their parameters, processes the parameters according to preset rules to obtain a processing result, and adds the processing result to the spliced picture to obtain the sharing picture.
4. The method of claim 3, wherein processing the parameters according to a preset rule to obtain the processing result specifically comprises:
And calculating the parameters according to a preset formula or a formula selected by the target object to obtain a processing result.
5. The method of claim 3, wherein processing the parameters according to a preset rule to obtain the processing result specifically comprises:
And calculating the parameters according to a preset formula or a formula selected by a target object to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
6. A user device, the user device comprising:
the receiving and transmitting unit is used for receiving the sharing instruction of the target object;
the acquisition unit is used for acquiring all the picture information of the sharing indication;
The processing unit is used for identifying and determining the key information areas in all the picture information, wherein a key information area is a partial area of all the picture information; intercepting the key information areas to obtain the key information pictures; and splicing the key information pictures to obtain a sharing picture and sharing the sharing picture with a sharing object;
Identifying all the picture information to determine key information areas in all the picture information specifically comprises the following steps:
acquiring the scene matched with the sharing indication and extracting the key information set corresponding to that scene; identifying all the picture information to determine a plurality of pieces of picture information and the plurality of picture areas corresponding to them; comparing the plurality of pieces of picture information with the key information set to determine the n pieces of picture information matching the key information set and their n picture areas; and determining the n picture areas as the key information areas, wherein n is an integer greater than or equal to 1.
7. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN202010753185.9A 2020-07-30 2020-07-30 Information sharing method and related product Active CN111932455B (en)

Priority Applications (1)

Application Number: CN202010753185.9A
Priority Date: 2020-07-30
Filing Date: 2020-07-30
Title: Information sharing method and related product


Publications (2)

Publication Number Publication Date
CN111932455A (en) 2020-11-13
CN111932455B (en) 2024-04-19

Family

Family ID: 73314904

Family Applications (1)

Application Number: CN202010753185.9A (Active)
Priority Date: 2020-07-30
Filing Date: 2020-07-30
Title: Information sharing method and related product

Country Status (1)

CN: CN111932455B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017050161A1 (en) * 2015-09-22 2017-03-30 阿里巴巴集团控股有限公司 Picture sharing method and device
CN105893412A (en) * 2015-11-24 2016-08-24 乐视致新电子科技(天津)有限公司 Image sharing method and apparatus
WO2017113873A1 (en) * 2015-12-28 2017-07-06 努比亚技术有限公司 Image synthesizing method, device and computer storage medium
CN110825988A (en) * 2019-11-08 2020-02-21 北京字节跳动网络技术有限公司 Information display method and device and electronic equipment

Also Published As

Publication number Publication date
CN111932455A (en) 2020-11-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant