CN111932455A - Information sharing method and related product - Google Patents

Information sharing method and related product

Info

Publication number
CN111932455A
CN111932455A
Authority
CN
China
Prior art keywords
picture
information
key information
pictures
sharing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010753185.9A
Other languages
Chinese (zh)
Other versions
CN111932455B (en)
Inventor
李思龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Futu Network Technology Co Ltd
Original Assignee
Shenzhen Futu Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Futu Network Technology Co Ltd filed Critical Shenzhen Futu Network Technology Co Ltd
Priority to CN202010753185.9A priority Critical patent/CN111932455B/en
Publication of CN111932455A publication Critical patent/CN111932455A/en
Application granted granted Critical
Publication of CN111932455B publication Critical patent/CN111932455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the present application provides an information sharing method and a related product. The method comprises the following steps: after the UE receives a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction; the UE identifies the picture information to determine a key information area, the key information area being a partial area within the picture information, and crops the key information area to obtain key information pictures; the UE then splices the key information pictures to obtain a shared picture and shares the shared picture with a sharing object. The technical solution provided by the present application has the advantage of improving the user experience of financial information sharing.

Description

Information sharing method and related product
Technical Field
The present application relates to the field of electronics and information technologies, and in particular, to an information sharing method and related products.
Background
Information sharing refers to the exchange and sharing of information and information products among information systems at different levels and in different departments; that is, information, a resource whose importance grows ever more evident in the internet era, is shared with others so that resources are allocated more reasonably, social costs are saved, and more wealth is created. It is an important means of improving the utilization of information resources and avoiding duplicated effort in information collection, storage, and management. Compared with ordinary information sharing, financial information sharing places higher requirements on key content and timeliness. Existing financial information sharing, however, carries a large amount of invalid (junk) information, cannot meet the sharing requirements of financial information, and degrades the user experience of financial information sharing.
Disclosure of Invention
The embodiments of the present application disclose an information sharing method and a related product, which can meet the sharing requirements of financial information and improve the user experience of financial information sharing.
In a first aspect, an information sharing method is provided. The method is applied to a user equipment (UE) and comprises the following steps:
after the UE receives a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction;
the UE identifies the picture information to determine a key information area, wherein the key information area is a partial area of the picture information, and crops the key information area to obtain key information pictures;
and the UE splices the key information pictures to obtain a shared picture, and shares the shared picture with a sharing object.
In a second aspect, a user equipment is provided, the user equipment comprising:
a transceiver unit, configured to receive a sharing instruction from a target object;
an acquisition unit, configured to acquire all picture information associated with the sharing instruction;
a processing unit, configured to identify the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; crop the key information area to obtain key information pictures; and splice the key information pictures to obtain a shared picture and share the shared picture with a sharing object.
In a third aspect, there is provided a terminal comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of the method of the first aspect.
A fourth aspect of embodiments of the present application discloses a computer-readable storage medium, which is characterized by storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the method of the first aspect.
A fifth aspect of embodiments of the present application discloses a computer program product, wherein the computer program product comprises a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiments of the present application, after receiving a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction; the UE identifies the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; the key information area is cropped to obtain key information pictures; and the UE splices the key information pictures to obtain a shared picture and shares the shared picture with a sharing object. With this technical solution, the shared picture contains only the key information pictures, redundant information is removed, sharing of invalid information is avoided, and the user experience of financial information sharing is improved.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an information sharing method according to an embodiment of the present disclosure;
fig. 2a is a schematic diagram of original picture information provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of an information sharing method according to an embodiment of the present disclosure;
FIG. 3a is a schematic diagram of a header picture according to an embodiment of the present application;
fig. 3b is a schematic diagram of a key information picture according to an embodiment of the present application;
fig. 3c is a schematic diagram of another key information picture provided in the first embodiment of the present application;
fig. 3d is a schematic diagram of another key information picture provided in the first embodiment of the present application;
fig. 3e is a schematic diagram of a shared picture according to an embodiment of the present application;
fig. 3f is a schematic view of another shared picture according to an embodiment of the present application;
fig. 3g is a schematic view of another shared picture provided in the first embodiment of the present application;
fig. 4 is a schematic structural diagram of a user equipment according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
The term "and/or" in this application is only one kind of association relationship describing the associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document indicates that the former and latter related objects are in an "or" relationship.
The "plurality" appearing in the embodiments of the present application means two or more. The descriptions of the first, second, etc. appearing in the embodiments of the present application are only for illustrating and differentiating the objects, and do not represent the order or the particular limitation of the number of the devices in the embodiments of the present application, and do not constitute any limitation to the embodiments of the present application. The term "connect" in the embodiments of the present application refers to various connection manners, such as direct connection or indirect connection, to implement communication between devices, which is not limited in this embodiment of the present application.
A terminal in the embodiments of the present application may refer to various forms of UE, such as an access terminal, subscriber unit, subscriber station, mobile station (MS), remote station, remote terminal, mobile device, user terminal, terminal device (terminal equipment), wireless communication device, user agent, or user equipment. The terminal device may also be a cellular phone, a cordless phone, a SIP (session initiation protocol) phone, a WLL (wireless local loop) station, a PDA (personal digital assistant) with a wireless communication function, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future 5G network, a terminal device in a future evolved PLMN (public land mobile network), or the like, which is not limited in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal disclosed in an embodiment of the present application. The terminal 100 may be a user equipment (UE). The terminal 100 includes a storage and processing circuit 110 and a sensor 170 connected to the storage and processing circuit 110; the sensor 170 may include a camera, a distance sensor, a gravity sensor, and the like. The electronic device may include two transparent display screens disposed on the back side and the front side of the electronic device, and some or all of the components between the two transparent display screens may also be transparent, so that, in terms of visual effect, the electronic device may be a transparent electronic device; if only some of the components are transparent, the electronic device may be a hollowed-out electronic device. Wherein:
the terminal 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may be a memory, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), etc., and the embodiments of the present application are not limited thereto. Processing circuitry in the storage and processing circuitry 110 may be used to control the operation of the terminal 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the terminal 100, such as an Internet browsing application, a Voice over Internet Protocol (VoIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as light-emitting-diode status indicator lights, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the terminal 100; the embodiments of the present application are not limited thereto.
The terminal 100 may include an input-output circuit 150. The input-output circuit 150 may be used to enable the terminal 100 to input and output data, i.e., to allow the terminal 100 to receive data from external devices and also to output data from the terminal 100 to external devices. The input-output circuit 150 may further include a sensor 170. The sensor 170 may include a vein identification module, and may further include an ambient light sensor, an optical or capacitive proximity sensor, a fingerprint identification module, a touch sensor (for example, an optical and/or capacitive touch sensor, where the touch sensor may be part of a touch display screen or used independently as a touch sensor structure), an acceleration sensor, a camera, and other sensors. The camera may be a front camera or a rear camera, and the fingerprint identification module may be integrated below the display screen to collect fingerprint images; the fingerprint identification module may be, for example, an optical fingerprint module, which is not limited herein. The front camera may be disposed below the front display screen, and the rear camera may be disposed below the rear display screen. Of course, the front camera or the rear camera may not be integrated with the display screen; in practical applications, the front or rear camera may also be of a pop-up structure. The specific structure of the front and rear cameras is not limited in the embodiments of the present application.
Input-output circuit 150 may also include one or more display screens, and when multiple display screens are provided, such as 2 display screens, one display screen may be provided on the front of the electronic device and another display screen may be provided on the back of the electronic device, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, transparent display, organic light emitting diode display, electronic ink display, plasma display, and display using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The terminal 100 can also include an audio component 140. Audio component 140 may be used to provide audio input and output functionality for terminal 100. The audio components 140 in the terminal 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 can be used to provide the terminal 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The terminal 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control operation of terminal 100 and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from terminal 100.
Referring to fig. 2, fig. 2 provides an information sharing method, which is executed by the UE (which may be the terminal shown in fig. 1, but may also be other types of devices, such as a tablet computer, a computer, etc.) shown in fig. 1, and the method shown in fig. 2 includes the following steps:
step S201, after receiving a sharing instruction of a target object, the UE acquires all picture information associated with the sharing instruction.
The implementation method of the step S201 may specifically include:
the UE acquires a sharing button of a target object (user) clicked on a view, and invokes a one-touch sharing function (i.e., receives a sharing instruction of the target object). All picture information such as a shared title (title), a Stock class instance (Stock) and the like is obtained at this time.
Step S202, the UE identifies the picture information to determine a key information area therein, wherein the key information area is a partial area of the picture information, and crops the key information area to obtain key information pictures.
The key information picture in step S202 may be picture information obtained by removing redundant information from all picture information.
The UE identifying the picture information to determine the key information area therein may specifically include:
the UE acquires a scene matched with the sharing instruction and extracts a key information set corresponding to the scene; it identifies the picture information to determine a plurality of picture information items and the plurality of picture areas corresponding to them; it compares the plurality of picture information items with the key information set to determine the n picture information items, and the n picture areas, that match the key information set; and it determines the n picture areas as the key information areas, where n is an integer greater than or equal to 1.
The identification of the pictures may be AI-based recognition, but other identification methods, for example a classifier, may also be used.
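For illustration, a minimal Swift sketch of this scene-based selection is given below; it assumes each picture area has already been recognized into a set of keywords, and the scene name and key information sets are assumptions introduced for the example.

```swift
// A minimal sketch of scene-based key-area selection.
import Foundation

struct PictureArea {
    let index: Int             // position of the area within the page
    let keywords: Set<String>  // recognized labels/text in the area
}

/// Returns the n picture areas whose recognized keywords overlap the key
/// information set extracted for the current sharing scene.
func keyInformationAreas(areas: [PictureArea],
                         scene: String,
                         keyInfoSets: [String: Set<String>]) -> [PictureArea] {
    guard let keySet = keyInfoSets[scene] else { return [] }
    return areas.filter { !$0.keywords.isDisjoint(with: keySet) }
}

// Illustrative key information set for a stock-detail sharing scene.
let keyInfoSets: [String: Set<String>] = [
    "stock-detail": ["price", "change", "volume", "turnover"]
]
```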
For example, in an optional scheme, the UE identifying the picture information to determine the key information area may specifically include:
the UE identifies the target object, determines a first identity of the target object, and extracts first weight data corresponding to the first identity;
the UE divides the picture information into m picture areas according to a preset rule, identifies each of the m picture areas to determine m keyword sets, and obtains m input data from the input values of the keywords in the m keyword sets;
and the UE computes the m input data against the first weight data to obtain m calculation results, and determines the w picture areas whose calculation results exceed a result threshold as the key information areas.
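A minimal Swift sketch of this weighted scoring follows; the keyword-to-value table, the weight data, and the function names are assumptions introduced for the example.

```swift
// Each of the m picture areas yields a keyword set, keywords map to
// input values, and the dot product with the identity's weight data is
// compared with a result threshold.
import Foundation

let keywordValues: [String: Double] = [   // assumed input value per keyword
    "price": 1.0, "change": 0.9, "news": 0.3, "advert": 0.0
]

/// Builds the input data for one picture area from its keyword set.
func inputVector(for keywords: Set<String>, vocabulary: [String]) -> [Double] {
    vocabulary.map { keywords.contains($0) ? (keywordValues[$0] ?? 0) : 0 }
}

/// Scores each area against the first weight data and returns the
/// indices of the w areas whose result exceeds the threshold.
func selectKeyAreas(areaKeywords: [Set<String>],   // m keyword sets
                    weights: [Double],             // first weight data
                    vocabulary: [String],
                    threshold: Double) -> [Int] {
    areaKeywords.enumerated().compactMap { index, keywords in
        let inputs = inputVector(for: keywords, vocabulary: vocabulary)
        let score = zip(inputs, weights).map(*).reduce(0, +)
        return score > threshold ? index : nil
    }
}
```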
For another example, in another optional scheme, the UE identifying the picture information to determine the key information area may specifically include:
the UE identifies the target object to determine a first identity of the target object, predicts a sharing object according to the first identity and the scene corresponding to the sharing instruction, and acquires second weight data for the first identity and the sharing object;
the UE divides the picture information into m picture areas according to a preset rule, identifies each of the m picture areas to determine m keyword sets, and obtains m input data from the input values of the keywords in the m keyword sets;
and the UE computes the m input data against the second weight data to obtain m calculation results, and determines the w picture areas whose calculation results exceed a result threshold as the key information areas.
Step S203, the UE splices the key information pictures to obtain a shared picture, and shares the shared picture with a sharing object.
The shared picture may be shared with the sharing object in various manners, for example through mail, instant messaging software (QQ, WeChat, or Facebook), financial applications, and the like.
The UE splicing the key information pictures to obtain the shared picture may specifically include:
the UE splices the key information pictures to obtain a spliced picture, identifies the key information pictures to obtain their parameters, processes the parameters according to a preset rule to obtain a processing result, and adds the processing result to the spliced picture to obtain the shared picture.
In an optional scheme, processing the parameters according to a preset rule to obtain a processing result specifically includes:
calculating the parameters according to a preset formula, or a formula selected by the target object, to obtain the processing result.
In another optional scheme, processing the parameters according to a preset rule to obtain a processing result specifically includes:
calculating the parameters according to a preset formula, or a formula selected by the target object, to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
The above charts include, but are not limited to: asset distribution charts, comprehensive revenue charts, profitability charts, asset trend charts, and the like, which may take the form of proportion charts, circular (pie) distribution charts, matrix charts, and the like.
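For illustration, the following Swift sketch shows one way such parameter processing and chart-data generation might look; the profit-rate formula and the `ChartPoint` model are assumptions introduced for the example.

```swift
// The parameters read from the key information pictures are run through
// a formula, and the statistics are turned into data behind a chart.
import Foundation

struct ChartPoint {
    let label: String
    let value: Double
}

/// A simple profit-rate calculation standing in for the preset formula
/// (or the formula selected by the target object).
func profitRate(currentValue: Double, cost: Double) -> Double {
    guard cost != 0 else { return 0 }
    return (currentValue - cost) / cost
}

/// Turns per-asset statistics into the data behind an asset
/// distribution (proportion) chart.
func assetDistribution(_ holdings: [String: Double]) -> [ChartPoint] {
    let total = holdings.values.reduce(0, +)
    guard total > 0 else { return [] }
    return holdings
        .map { ChartPoint(label: $0.key, value: $0.value / total) }
        .sorted { $0.value > $1.value }
}
```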
According to the technical scheme, after receiving a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction; the UE identifies the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; the key information area is cropped to obtain key information pictures; and the UE splices the key information pictures to obtain a shared picture and shares the shared picture with a sharing object. With this technical solution, the shared picture contains only the key information pictures, redundant information is removed, sharing of invalid information is avoided, and the user experience of financial information sharing is improved.
The UE identifying the target object and determining the first identity of the target object may specifically include:
E1, acquiring a target face image from a target image of the target object;
E2, verifying the target face image;
E3, when the target face image passes the verification, determining that the target object has the first identity corresponding to a preset face template.
In a specific implementation, a preset face template may be stored in the electronic device in advance, and the original image of the target object may be acquired through the camera. The electronic device then determines the first identity of the target object when the target face image is successfully matched with the preset face template; otherwise, the first identity is not determined. In this way the identity of the target object can be recognized; for example, it can be judged whether the first identity belongs to a reserved patient, preventing other people from starting a telemedicine session.
Further, in a possible example, the verifying of the target face image in step E2 may include the following steps:
E21, performing region segmentation on the target face image to obtain a target face region, wherein the target face region is an image region containing only the face;
E22, performing binarization processing on the target face region to obtain a binarized face image;
E23, dividing the binarized face image into a plurality of regions, wherein the regions have equal areas and each area is larger than a preset area value;
E24, extracting feature points from the binarized face image to obtain a plurality of feature points;
E25, determining, from the plurality of feature points, the feature point distribution density corresponding to each of the plurality of regions to obtain a plurality of feature point distribution densities;
E26, determining a target mean square error from the plurality of feature point distribution densities;
E27, determining a target quality evaluation value corresponding to the target mean square error according to a preset mapping relationship between mean square errors and quality evaluation values;
E28, when the target quality evaluation value is smaller than a preset quality evaluation value, performing image enhancement processing on the target face image, and matching the enhanced target face image with a preset face template to obtain a matching value;
E29, when the matching value is larger than a preset threshold value, determining that the target face image passes the verification.
In a specific implementation, the preset threshold and the preset area value may be set by the user or defaulted by the system, and the preset face template may be stored in the electronic device in advance. The electronic device may perform region segmentation on the target face image to obtain the target face region, which may be a region that contains only the face and no background. Binarization processing may then be performed on the target face region to obtain the binarized face image, which reduces the image complexity. The binarized face image is divided into a plurality of regions of equal area, each larger than the preset area value. Further, feature point extraction may be performed on the binarized face image to obtain a plurality of feature points; the feature extraction algorithm may be at least one of the following: scale-invariant feature transform (SIFT), SURF, pyramid-based methods, Harris corner detection, and the like, without limitation.
Further, the electronic device may determine, from the plurality of feature points, the feature point distribution density of each region to obtain a plurality of feature point distribution densities, and determine a target mean square error from these densities. The electronic device may store in advance a mapping relationship between preset mean square errors and quality evaluation values, and determine from it the target quality evaluation value corresponding to the target mean square error; the smaller the mean square error, the larger the quality evaluation value. When the target quality evaluation value is greater than the preset quality evaluation value, the target face image is matched directly with the preset face template; when the matching value between them is greater than the preset threshold, the target face image is determined to have passed the verification, and otherwise it is determined to have failed.
Further, when the target quality evaluation value is smaller than the preset quality evaluation value, the terminal may perform image enhancement processing on the target face image and match the enhanced target face image with the preset face template; if the matching value between them is larger than the preset threshold, the target face image is determined to have passed the verification, and otherwise it is determined to have failed.
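For illustration, the following Swift sketch outlines the density-based quality check of steps E23-E27; the grid size and the mapping from mean square error to quality evaluation value are assumptions introduced for the example.

```swift
// A minimal sketch of the quality check, assuming the face image has
// already been binarized and its feature points extracted.
import Foundation

struct FeaturePoint {
    let x: Int
    let y: Int
}

/// Splits a width x height image into a rows x cols grid, counts the
/// feature points in each cell, and converts counts to densities (E25).
func featureDensities(points: [FeaturePoint], width: Int, height: Int,
                      rows: Int, cols: Int) -> [Double] {
    var counts = [Int](repeating: 0, count: rows * cols)
    let cellWidth = Double(width) / Double(cols)
    let cellHeight = Double(height) / Double(rows)
    for point in points {
        let col = min(Int(Double(point.x) / cellWidth), cols - 1)
        let row = min(Int(Double(point.y) / cellHeight), rows - 1)
        counts[row * cols + col] += 1
    }
    let cellArea = cellWidth * cellHeight
    return counts.map { Double($0) / cellArea }   // points per pixel
}

/// Mean square error of the densities around their mean (E26).
func meanSquareError(_ densities: [Double]) -> Double {
    guard !densities.isEmpty else { return 0 }
    let mean = densities.reduce(0, +) / Double(densities.count)
    return densities.map { ($0 - mean) * ($0 - mean) }.reduce(0, +)
        / Double(densities.count)
}

/// Maps the MSE to a quality evaluation value (E27): the smaller the
/// MSE, the more evenly the features are spread, the higher the score.
/// The thresholds below are placeholders.
func qualityScore(mse: Double) -> Double {
    if mse < 1e-6 { return 0.9 }
    if mse < 1e-5 { return 0.7 }
    return 0.4
}
```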
Example one
The present application provides an information sharing method executed by a smartphone, as shown in fig. 3. The method comprises the following steps:
Step S301, after receiving a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction;
Step S302, the UE draws a header picture from one of the pictures in the picture information;
the implementation manner of the step S302 may specifically include: and obtaining a default background image of the head, and scaling the background image to make the width equal to the width of the mobile phone screen. A canvas area (UI Graphics Begin Image Context With Options) is created With width and height consistent With the scaled background map size.
And drawing the background picture to the canvas area through a draw In Rect interface of UI Graphics. When the App is started, a two-dimensional Code (QR Code) picture and a slogan file plan are pulled from the back end of the picture-sharing App and stored in a cache. And if the two-dimensional code is failed to be pulled, using default picture and file information. And obtaining a brand picture, a two-dimensional code picture, a slogan file, and a draw In Rect drawing layer 1 of the local cache to a canvas area. The draw In Rect draws layer 2 to the canvas area according to the title copy (title) that was In when sharing. And establishing a quote bar View (UI View), initializing the View according to the incoming stock information during sharing, and acquiring the stock quote information from the server side during initializing the View. Render the Layer (UI Layer) content, render In Context, of the view to the Layer 3 to canvas area. Depending on the shared entry, there may be different renderings in tier 1, tier 2, and tier 3. And finishing the drawing, wherein the UI Graphics End Image Context generates a header picture from the canvas area, and the drawn header picture can be as shown in FIG. 3a, one picture with picture information can be as shown in FIG. 2a, and the redundant information in FIG. 2a is removed from FIG. 3 a.
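For illustration, a minimal UIKit sketch of this drawing flow is given below; the layout rectangles, parameter names, and the `drawHeaderPicture` helper are assumptions introduced for the example, and caching and error handling are omitted.

```swift
// A minimal UIKit sketch of the header drawing in step S302.
import UIKit

func drawHeaderPicture(background: UIImage, brand: UIImage, qrCode: UIImage,
                       title: String, quoteBar: UIView) -> UIImage? {
    // Scale the background so its width equals the phone screen width.
    let screenWidth = UIScreen.main.bounds.width
    let scale = screenWidth / background.size.width
    let size = CGSize(width: screenWidth,
                      height: background.size.height * scale)

    // Create a canvas area matching the scaled background size.
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }

    // Background, then layer 1: cached brand and QR code pictures.
    background.draw(in: CGRect(origin: .zero, size: size))
    brand.draw(in: CGRect(x: 16, y: 16, width: 120, height: 32))
    qrCode.draw(in: CGRect(x: size.width - 72, y: 16, width: 56, height: 56))

    // Layer 2: the title copy passed in when sharing.
    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.boldSystemFont(ofSize: 18),
        .foregroundColor: UIColor.white
    ]
    (title as NSString).draw(at: CGPoint(x: 16, y: 64),
                             withAttributes: attributes)

    // Layer 3: render the quote bar view's layer into the canvas.
    if let context = UIGraphicsGetCurrentContext() {
        context.saveGState()
        context.translateBy(x: 0, y: size.height - quoteBar.bounds.height)
        quoteBar.layer.render(in: context)
        context.restoreGState()
    }

    // Generate the header picture from the canvas area.
    return UIGraphicsGetImageFromCurrentImageContext()
}
```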
Step S303, the UE identifies the pictures according to the picture information to determine the key information areas, and draws the key information pictures;
the step of drawing the key information picture may specifically include: finding a View (UI View) to be shared according to the clicked sharing button; and adjusting the view content by using a view layout method, and hiding the content irrelevant to sharing. The canvas area (UI Graphics Begin Image Context With Options) of the size required to create the view, the height at which the view is not visible is also calculated. And rendering (render In Context) the contents of the current Layer (UI Layer) of the view to the canvas area. The view is scrolled screen by screen for rendering multiple times by adjusting the start point position (set Content Offset) for the view Content beyond the visible area. And finishing the drawing (UI Graphics End Image Context) and generating a key information picture. The key information picture is shown in fig. 3 b.
If the picture information includes an asset analysis sharing picture, the step of drawing the key information picture may specifically include: the user enters an asset analysis page, and asset and revenue statistics are pulled from the background service according to a default filter or the time range and profit-rate formula actively selected by the user; when the user clicks the sharing button, a front-end layout method is called with the pulled data and the user information to generate a comprehensive information view of the profitability, comprehensive revenue, and asset trend; a canvas area of the size required by the view is created (UIGraphicsBeginImageContextWithOptions); the content of the view's current layer (CALayer) is rendered (renderInContext) into the canvas area; for view content beyond the visible area, the view is scrolled screen by screen by adjusting the start point position (setContentOffset) and rendered multiple times; drawing is finished (UIGraphicsEndImageContext) and the key information picture with the asset analysis is obtained. The key information picture of the asset analysis is shown in fig. 3c (a numeric display) or fig. 3d (a chart display).
Step S304, the UE splices the header picture and the key information picture into the final shared picture, and shares the shared picture with the sharing object.
The shared picture may be as shown in fig. 3e, fig. 3f, or fig. 3g.
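For illustration, a minimal UIKit sketch of the vertical stitching in step S304 is given below; equal picture widths are assumed, and the helper name is introduced for the example.

```swift
// The header picture and the key information picture(s) are stacked
// vertically into the final shared picture.
import UIKit

func stitchVertically(_ images: [UIImage]) -> UIImage? {
    guard let width = images.map({ $0.size.width }).max() else { return nil }
    let height = images.reduce(CGFloat(0)) { $0 + $1.size.height }

    UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height),
                                           false, 0)
    defer { UIGraphicsEndImageContext() }

    var y: CGFloat = 0
    for image in images {
        image.draw(in: CGRect(x: 0, y: y, width: width,
                              height: image.size.height))
        y += image.size.height
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}

// Usage: let sharedPicture = stitchVertically([headerPicture, keyInfoPicture])
```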
According to the technical scheme, after receiving a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction; the UE identifies the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; the key information area is cropped to obtain key information pictures; and the UE splices the key information pictures to obtain a shared picture and shares the shared picture with a sharing object. With this technical solution, the shared picture contains only the key information pictures, redundant information is removed, sharing of invalid information is avoided, and the user experience of financial information sharing is improved.
Referring to fig. 4, fig. 4 provides a user equipment including:
a transceiver unit 401, configured to receive a sharing instruction from a target object;
an obtaining unit 402, configured to obtain all picture information associated with the sharing instruction;
a processing unit 403, configured to identify the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; crop the key information area to obtain key information pictures; and splice the key information pictures to obtain a shared picture and share the shared picture with a sharing object.
The specific processing manner of the processing unit in the terminal shown in fig. 4 may refer to the description of the embodiment shown in fig. 2, which is not described herein again.
Referring to fig. 5, fig. 5 is a device 50 provided in an embodiment of the present application, where the device 50 includes a processor 501, a memory 502, and a communication interface 503, and the processor 501, the memory 502, and the communication interface 503 are connected to each other through a bus.
The memory 502 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 502 is used to store related computer programs and data. The communication interface 503 is used to receive and transmit data.
The processor 501 may be one or more Central Processing Units (CPUs), and in the case that the processor 501 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 501 in the device 50 is adapted to read the computer program code stored in said memory 502 and to perform the following operations:
after receiving a sharing instruction from a target object, acquiring all picture information associated with the sharing instruction;
identifying the picture information to determine a key information area, wherein the key information area is a partial area of the picture information, and cropping the key information area to obtain key information pictures;
and splicing the key information pictures to obtain a shared picture, and sharing the shared picture with a sharing object.
The splicing of the key information pictures by the UE to obtain the shared picture specifically includes:
the UE adds a preset header background at the head position of the key information pictures and splices them to obtain the shared picture.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
the method comprises the steps of obtaining a scene matched with the sharing indication, extracting a key information set corresponding to the scene, identifying all picture information to determine a plurality of picture information of all picture information and a plurality of picture areas corresponding to the plurality of picture information, comparing the plurality of picture information with the key information set to determine n picture information and n picture areas in the plurality of picture information matched with the key information set, and determining the n picture areas as the key information areas, wherein n is an integer greater than or equal to 1.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
identifying the target object, determining a first identity of the target object, and extracting first weight data corresponding to the first identity;
dividing the picture information into m picture areas according to a preset rule, identifying each of the m picture areas to determine m keyword sets, and obtaining m input data from the input values of the keywords in the m keyword sets;
and computing the m input data against the first weight data to obtain m calculation results, and determining the w picture areas whose calculation results exceed a result threshold as the key information areas.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
the UE identifies the target object to determine a first identity of the target object, predicts a shared object according to the first identity and a scene corresponding to the sharing indication, and acquires second weight data of the first identity and the shared object;
the UE divides all the picture information into m picture areas according to a preset rule, the m picture areas are respectively identified to determine m keyword sets of the m picture areas, and m input data are obtained according to input values of keywords and the m keyword sets;
and respectively calculating the m input data and the second weight data to obtain m calculation results, and determining w pictures corresponding to w calculation results which are greater than a result threshold value in the m calculation results as key information areas.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
and the UE splices the key information pictures to obtain spliced pictures, identifies the key information pictures to obtain parameters of the key information pictures, processes the parameters according to a preset rule to obtain a processing result, and adds the processing result to the spliced pictures to obtain the shared pictures.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
and calculating the parameters according to a preset formula or a formula selected by the target object to obtain a processing result.
In an alternative, the computer program code stored in the memory 502 may further perform the following operations:
and calculating the parameters according to a preset formula or a formula selected by the target object to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
An embodiment of the present application further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a network device, the method flow shown in fig. 2 is implemented.
An embodiment of the present application further provides a computer program product, and when the computer program product runs on a terminal, the method flow shown in fig. 2 is implemented.
Embodiments of the present application also provide a terminal including a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of the embodiment shown in fig. 2.
The foregoing has described the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation. It will be appreciated that, in order to carry out the functions described above, the electronic device may comprise corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the present application is not limited by the order of the actions described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a division by logical function, and other divisions are possible in practice: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The embodiments of the present application have been described in detail above; the principles and implementations of the present application are illustrated herein through specific examples, and the description of the embodiments above is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, based on the idea of the present application, make changes to the specific implementations and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. An information sharing method, applied to a user equipment (UE), the method comprising the following steps:
after the UE receives a sharing instruction from a target object, the UE acquires all picture information associated with the sharing instruction;
the UE identifies the picture information to determine a key information area, wherein the key information area is a partial area of the picture information, and crops the key information area to obtain key information pictures;
and the UE splices the key information pictures to obtain a shared picture, and shares the shared picture with a sharing object.
2. The method according to claim 1, wherein the UE splicing the key information pictures to obtain the shared picture specifically comprises:
the UE adds a preset header background at the head position of the key information pictures and splices them to obtain the shared picture.
3. The method according to claim 1 or 2, wherein the UE identifying the picture information to determine the key information area specifically comprises:
the UE acquires a scene matched with the sharing instruction and extracts a key information set corresponding to the scene; identifies the picture information to determine a plurality of picture information items and the plurality of picture areas corresponding to them; compares the plurality of picture information items with the key information set to determine the n picture information items, and the n picture areas, that match the key information set; and determines the n picture areas as the key information areas, wherein n is an integer greater than or equal to 1.
4. The method according to claim 1 or 2, wherein the UE identifying the picture information to determine the key information area specifically comprises:
the UE identifies the target object, determines a first identity of the target object, and extracts first weight data corresponding to the first identity;
the UE divides the picture information into m picture areas according to a preset rule, identifies each of the m picture areas to determine m keyword sets, and obtains m input data from the input values of the keywords in the m keyword sets;
and the UE computes the m input data against the first weight data to obtain m calculation results, and determines the w picture areas whose calculation results exceed a result threshold as the key information areas, wherein m is an integer greater than or equal to 2.
5. The method according to claim 1 or 2, wherein the UE identifying the picture information to determine the key information area specifically comprises:
the UE identifies the target object to determine a first identity of the target object, predicts a sharing object according to the first identity and the scene corresponding to the sharing instruction, and acquires second weight data for the first identity and the sharing object;
the UE divides the picture information into m picture areas according to a preset rule, identifies each of the m picture areas to determine m keyword sets, and obtains m input data from the input values of the keywords in the m keyword sets;
and the UE computes the m input data against the second weight data to obtain m calculation results, and determines the w picture areas whose calculation results exceed a result threshold as the key information areas.
6. The method according to claim 1, wherein the UE splicing the key information pictures to obtain the shared picture specifically comprises:
the UE splices the key information pictures to obtain a spliced picture, identifies the key information pictures to obtain their parameters, processes the parameters according to a preset rule to obtain a processing result, and adds the processing result to the spliced picture to obtain the shared picture.
7. The method according to claim 6, wherein processing the parameters according to the preset rule to obtain the processing result specifically comprises:
calculating the parameters according to a preset formula, or a formula selected by the target object, to obtain the processing result.
8. The method according to claim 6, wherein processing the parameters according to the preset rule to obtain the processing result specifically comprises:
calculating the parameters according to a preset formula, or a formula selected by the target object, to obtain a parameter statistical result, generating a chart corresponding to the parameter statistical result, and determining the parameter statistical result and the chart as the processing result.
9. A user equipment, comprising:
a transceiver unit, configured to receive a sharing instruction from a target object;
an acquisition unit, configured to acquire all picture information associated with the sharing instruction;
a processing unit, configured to identify the picture information to determine a key information area, wherein the key information area is a partial area of the picture information; crop the key information area to obtain key information pictures; and splice the key information pictures to obtain a shared picture and share the shared picture with a sharing object.
10. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-8.
CN202010753185.9A 2020-07-30 2020-07-30 Information sharing method and related product Active CN111932455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010753185.9A CN111932455B (en) 2020-07-30 2020-07-30 Information sharing method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010753185.9A CN111932455B (en) 2020-07-30 2020-07-30 Information sharing method and related product

Publications (2)

Publication Number Publication Date
CN111932455A true CN111932455A (en) 2020-11-13
CN111932455B CN111932455B (en) 2024-04-19

Family

ID=73314904

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010753185.9A Active CN111932455B (en) 2020-07-30 2020-07-30 Information sharing method and related product

Country Status (1)

Country Link
CN (1) CN111932455B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017050161A1 (en) * 2015-09-22 2017-03-30 阿里巴巴集团控股有限公司 Picture sharing method and device
CN105893412A (en) * 2015-11-24 2016-08-24 乐视致新电子科技(天津)有限公司 Image sharing method and apparatus
WO2017113873A1 (en) * 2015-12-28 2017-07-06 努比亚技术有限公司 Image synthesizing method, device and computer storage medium
CN110825988A (en) * 2019-11-08 2020-02-21 北京字节跳动网络技术有限公司 Information display method and device and electronic equipment

Also Published As

Publication number Publication date
CN111932455B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN111464716B (en) Certificate scanning method, device, equipment and storage medium
CN107729889B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107944414B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110969056B (en) Document layout analysis method, device and storage medium for document image
CN110431563B (en) Method and device for correcting image
CN111209377B (en) Text processing method, device, equipment and medium based on deep learning
CN111599460A (en) Telemedicine method and system
CN111881813A (en) Data storage method and system of face recognition terminal
CN112533072A (en) Image sending method and device and electronic equipment
CN110634095B (en) Watermark adding method, watermark identifying device and electronic equipment
CN111984884A (en) Non-contact data acquisition method and device for large database
CN111401981B (en) Bidding method, device and storage medium of bidding cloud host
CN109726726B (en) Event detection method and device in video
CN110796673B (en) Image segmentation method and related product
CN116994272A (en) Identification method and device for target picture
WO2020124454A1 (en) Font switching method and related product
CN116307394A (en) Product user experience scoring method, device, medium and equipment
CN111930826A (en) Order generation method and system of software interface
CN115330522A (en) Credit card approval method and device based on clustering, electronic equipment and medium
CN111932455B (en) Information sharing method and related product
CN111899042B (en) Malicious exposure advertisement behavior detection method and device, storage medium and terminal
CN111353422B (en) Information extraction method and device and electronic equipment
CN112435671A (en) Intelligent voice control method and system for accurately recognizing Chinese
CN112991491B (en) Method and device for time-sharing display of data, electronic equipment and storage medium
CN115482308B (en) Image processing method, computer device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant