CN109167939B - Automatic text collocation method and device and computer storage medium


Info

Publication number
CN109167939B
Authority
CN
China
Prior art keywords
target
image
determining
preset
target image
Prior art date
Legal status
Active
Application number
CN201810896921.9A
Other languages
Chinese (zh)
Other versions
CN109167939A (en)
Inventor
陈洁
Current Assignee
Chengdu Ck Technology Co ltd
Original Assignee
Chengdu Ck Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Ck Technology Co ltd filed Critical Chengdu Ck Technology Co ltd
Priority to CN201810896921.9A priority Critical patent/CN109167939B/en
Publication of CN109167939A publication Critical patent/CN109167939A/en
Application granted granted Critical
Publication of CN109167939B publication Critical patent/CN109167939B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278 Subtitling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an automatic text matching method and device and a computer storage medium, which automatically match text to an image so as to provide a relevant description of the image for the user in a specific scene. The method comprises the following steps: determining a target image; identifying the target image and determining the category to which a target object in the target image belongs; determining the target matching text corresponding to that category based on the correspondence between categories and matching texts; and performing preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.

Description

Automatic text collocation method and device and computer storage medium
Technical Field
The present invention relates to the field of electronic technologies, and in particular to an automatic text matching method and device and a computer storage medium.
Background
As information processing technology develops, more and more electronic devices appear in people's work and life. Many electronic devices, such as mobile phones, tablet computers and notebook computers, have communication functions, and users can use them for information interaction and information sharing, enriching people's lives. In the prior art, a user can take pictures with an electronic device and record moments of life as images. However, when the user shares images through a social platform or browses them in an album, only the images themselves are displayed, so the recorded content is monotonous.
Disclosure of Invention
The embodiments of the invention provide an automatic text matching method and device and a computer storage medium, which automatically match text to an image so as to provide a relevant description of the image for the user in a specific scene.
In a first aspect, the present invention provides an automatic text matching method, including:
determining a target image;
identifying the target image, and determining the category to which a target object in the target image belongs;
determining the target matching text corresponding to that category based on the correspondence between categories and matching texts;
and performing preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.
Optionally, the determining a target image includes:
when an image acquisition device is started to shoot an image, determining the currently shot image as the target image; or
when an image acquisition device is started to shoot an image, determining the preview image corresponding to the image acquisition device as the target image; or
determining an image selected by the user as the target image.
Optionally, the identifying the target image includes:
determining a target region in the target image;
and extracting the image of the target region and performing recognition on it.
Optionally, the determining a target region in the target image includes:
when the target image is a preview image of an image acquisition device, determining the focusing area of the image acquisition device as the target region; or
when the target image contains depth-of-field information, determining the foreground area of the target image as the target region; or
determining an area selected by the user in the target image as the target region.
Optionally, the determining the target matching text corresponding to the category based on the correspondence between categories and matching texts includes:
determining the associated content corresponding to the category;
matching the associated content against the preset core semantics in a preset matching-text database, obtaining the matching text corresponding to the successfully matched preset core semantics, and determining that matching text as the target matching text, wherein the preset matching-text database stores the correspondence between preset core semantics and matching texts.
Optionally, before determining the target matching text corresponding to the category based on the correspondence between categories and matching texts, the method further includes:
performing word segmentation on a text fragment of a preset literary work to obtain word segments;
determining, based on the word segments, the core semantics corresponding to the text fragment, taking the text fragment as a matching text, and adding the correspondence between those core semantics and the text fragment to the preset matching-text database as a correspondence between preset core semantics and a matching text.
Optionally, the determining the matching text corresponding to the successfully matched preset core semantics as the target matching text includes:
if the associated content successfully matches a plurality of preset core semantics, displaying the plurality of matching texts corresponding to those preset core semantics;
and determining a first matching text selected by the user from the plurality of matching texts as the target matching text.
Optionally, after determining the matching text selected by the user from the plurality of matching texts as the target matching text, the method further includes:
updating the selection count of the first matching text, so that the next time a target matching text is matched for the category, the first matching text is displayed among the candidate matching texts if its selection count is greater than a preset count.
Optionally, the performing preset processing on the target image based on the target matching text includes:
displaying the target matching text superimposed on a preset region of the target image.
Optionally, the performing preset processing on the target image based on the target matching text includes:
storing the correspondence between the target image and the target matching text, so that when the user shares the target image to a preset social platform, the target matching text is automatically filled into the text input box of the sharing interface.
In a second aspect, an embodiment of the present invention provides an automatic text matching device, including:
a first determining unit, configured to determine a target image;
a second determining unit, configured to identify the target image and determine the category to which a target object in the target image belongs;
a third determining unit, configured to determine the target matching text corresponding to that category based on the correspondence between categories and matching texts;
and a processing unit, configured to perform preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.
Optionally, the first determining unit is specifically configured to:
when an image acquisition device is started to shoot an image, determine the currently shot image as the target image; or
when an image acquisition device is started to shoot an image, determine the preview image corresponding to the image acquisition device as the target image; or
determine an image selected by the user as the target image.
Optionally, the second determining unit is specifically configured to:
determine a target region in the target image;
and extract the image of the target region and perform recognition on it.
Optionally, the second determining unit is further specifically configured to:
when the target image is a preview image of an image acquisition device, determine the focusing area of the image acquisition device as the target region; or
when the target image contains depth-of-field information, determine the foreground area of the target image as the target region; or
determine an area selected by the user in the target image as the target region.
Optionally, the third determining unit is specifically configured to:
determine the associated content corresponding to the category;
match the associated content against the preset core semantics in a preset matching-text database, obtain the matching text corresponding to the successfully matched preset core semantics, and determine that matching text as the target matching text, wherein the preset matching-text database stores the correspondence between preset core semantics and matching texts.
Optionally, the device further includes a database establishing unit, specifically configured to:
before the target matching text corresponding to the category is determined based on the correspondence between categories and matching texts, perform word segmentation on a text fragment of a preset literary work to obtain word segments;
and determine, based on the word segments, the core semantics corresponding to the text fragment, take the text fragment as a matching text, and add the correspondence between those core semantics and the text fragment to the preset matching-text database as a correspondence between preset core semantics and a matching text.
Optionally, the third determining unit is further specifically configured to:
if the associated content successfully matches a plurality of preset core semantics, display the plurality of matching texts corresponding to those preset core semantics;
and determine a first matching text selected by the user from the plurality of matching texts as the target matching text.
Optionally, the device further includes an updating unit, specifically configured to:
after the matching text selected by the user from the plurality of matching texts is determined as the target matching text, update the selection count of the first matching text, so that the next time a target matching text is matched for the category, the first matching text is displayed among the candidate matching texts if its selection count is greater than a preset count.
Optionally, the processing unit is specifically configured to:
display the target matching text superimposed on a preset region of the target image.
Optionally, the processing unit is further specifically configured to:
store the correspondence between the target image and the target matching text, so that when the user shares the target image to a preset social platform, the target matching text is automatically filled into the text input box of the sharing interface.
In a third aspect, an embodiment of the present invention provides a terminal system including a processor, where the processor is configured to implement the steps of the automatic text matching method of the first aspect when executing a computer program stored in a memory.
In a fourth aspect, an embodiment of the present invention provides a readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the automatic text matching method of the first aspect.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the technical solution of the embodiments of the invention, after the target image is determined, it can be identified and the category to which the target object in it belongs can be determined; the target matching text corresponding to that category is then determined based on the correspondence between categories and matching texts, and preset processing is performed on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene. The preset scene may be an image-sharing scene, or a scene such as browsing the target image stored after shooting. A text passage that suits the target image can thus be determined automatically, so that the target matching text corresponding to the target image is displayed in time when the user shares or browses it. This enriches the information recorded with the image, improves the convenience of shooting and sharing and the quality of the matched text, lowers the difficulty of sharing images, and improves the user experience.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic diagram of one possible terminal system;
FIG. 2 is a flowchart of an automatic text matching method according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of an automatic text matching device according to a second embodiment of the present invention.
Detailed Description
The embodiments of the invention provide an automatic text matching method and device and a computer storage medium, which automatically match text to an image so as to provide a relevant description of the image for the user in a specific scene. The method comprises the following steps: determining a target image; identifying the target image and determining the category to which a target object in the target image belongs; determining the target matching text corresponding to that category based on the correspondence between categories and matching texts; and performing preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features in the embodiments and examples of the present invention explain the technical solutions of the present application in detail rather than limiting them, and the technical features of the embodiments and examples of the present application may be combined with one another where there is no conflict.
The term "and/or" herein merely describes an association between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
To facilitate the description of the technical solution in the embodiments of the present invention, the terminal system to which the automatic text matching method of the embodiments is applied is described first. Please refer to fig. 1, which is a schematic diagram of a possible terminal system. In fig. 1, the terminal system 100 is a system including a touch input device 101. However, it should be understood that the system may also include one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick. The terminal system 100 may be adapted to run one or more operating systems, such as general-purpose operating systems like the Android, Windows, Apple iOS, BlackBerry, and Google Chrome operating systems. In other embodiments, however, the terminal system 100 may run a dedicated operating system instead of a general-purpose one.
In some embodiments, the terminal system 100 may also support running one or more applications, including but not limited to one or more of the following: disk management applications, secure encryption applications, rights management applications, system setup applications, word processing applications, presentation slide applications, spreadsheet applications, database applications, gaming applications, telephone applications, video conferencing applications, email applications, instant messaging applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, digital video player applications, and the like.
The operating system and the various applications running on the terminal system may use the touch input device 101 as a physical input interface device for the user. The touch input device 101 has a touch surface as a user interface. Optionally, the touch surface of the touch input device 101 is the surface of the display screen 102, and the touch input device 101 and the display screen 102 together form the touch-sensitive display screen 120; in other embodiments, however, the touch input device 101 has a separate touch surface not shared with other device modules. The touch-sensitive display screen further includes one or more contact sensors 106 for detecting whether contact has occurred on the touch input device 101.
The touch-sensitive display screen 120 may use LCD (liquid crystal display) technology, LPD (light-emitting polymer display) technology, or LED (light-emitting diode) technology, or any other technology that enables image display. The touch-sensitive display screen 120 may further detect contact and any movement or breaking of contact using any of a variety of touch sensing technologies now known or later developed, such as capacitive or resistive sensing technologies. In some embodiments, the touch-sensitive display screen 120 may detect a single contact point or multiple contact points and changes in their movement simultaneously.
In addition to the touch input device 101 and the optional display screen 102, the terminal system 100 can also include memory 103 (which optionally includes one or more computer-readable storage media), a memory controller 104, and one or more processors 105, which can communicate via one or more signal buses 107.
Memory 103 may include cache and high-speed random access memory (RAM), such as common double data rate synchronous dynamic RAM (DDR SDRAM), and may also include non-volatile memory (NVRAM), such as one or more read-only memories (ROM), disk storage devices, flash memory devices, or other non-volatile storage such as compact disks (CD-ROM, DVD-ROM), floppy disks, or data tapes. Memory 103 may be used to store the aforementioned operating system and application software, as well as the various types of data generated and received during system operation. Memory controller 104 may control access to memory 103 by the other components of the system 100.
The processor 105 is used to run or execute the operating system, the various software programs, and its own instruction set stored in the memory 103, and to process data and instructions received from the touch input device 101 or from other external input paths, so as to implement the various functions of the system 100. The processor 105 may include, but is not limited to, one or more of a central processing unit (CPU), a graphics processing unit (GPU), a microcontroller unit (MCU), a digital signal processor (DSP), a field-programmable gate array (FPGA), and an application-specific integrated circuit (ASIC). In some embodiments, the processor 105 and the memory controller 104 may be implemented on a single chip; in other embodiments, they may be implemented on separate chips.
In fig. 1, a signal bus 107 is configured to connect the various components of the terminal system 100 for communication. It should be understood that the configuration and connection of the signal bus 107 shown in fig. 1 are exemplary rather than limiting. Depending on the specific application environment and hardware configuration requirements, in other embodiments the signal bus 107 may adopt other connection schemes familiar to those skilled in the art, and conventional combinations or variations thereof, to realize the required signal connections among the components.
Further, in some embodiments, the terminal system 100 may also include a peripheral I/O interface 111, RF circuitry 112, audio circuitry 113, a speaker 114, a microphone 115, and a camera module 116. The system 100 may also include one or more sensor modules 118 of various kinds.
RF (radio frequency) circuitry 112 is used to receive and transmit radio frequency signals to enable communication with other communication devices. The RF circuitry 112 may include, but is not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a subscriber identity module (SIM) card, memory, and so forth. The RF circuitry 112 optionally communicates wirelessly with networks, such as the internet (also known as the World Wide Web (WWW)), an intranet, and/or a wireless network (such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN)), and with other devices. The RF circuitry 112 may also include circuitry for detecting near field communication (NFC) fields. The wireless communication may employ one or more communication standards, protocols, and techniques, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution-Data Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPA), Long Term Evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Bluetooth Low Energy, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n and/or IEEE 802.11ac), voice over Internet protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this application.
The audio circuitry 113, speaker 114, and microphone 115 provide an audio interface between the user and the terminal system 100. The audio circuit 113 receives audio data from the external I/O port 111, converts the audio data into an electrical signal, and transmits the electrical signal to the speaker 114. The speaker 114 converts the electrical signal into human-audible sound waves. The audio circuit 113 also receives electrical signals converted by the microphone 115 from sound waves, and may further convert them into audio data and transmit the audio data to the external I/O port 111 for processing by an external device. The audio data may be transferred to the memory 103 and/or the RF circuitry 112 under the control of the processor 105 and the memory controller 104. In some implementations, the audio circuit 113 may also be connected to a headset interface.
The camera module 116 is used to take still images and video according to instructions from the processor 105. The camera module 116 may have a lens device 1161 and an image sensor 1162, receiving an optical signal from the outside through the lens device 1161 and converting it into an electrical signal through the image sensor 1162, such as a complementary metal-oxide semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. The camera module 116 may further have an image signal processor (ISP) 1163 for processing and correcting the aforementioned electrical signals and converting them into specific image format files, such as JPEG (Joint Photographic Experts Group) image files, TIFF (Tagged Image File Format) image files, and the like. The image file may be sent to the memory 103 for storage or to the RF circuitry 112 for transmission to an external device, according to instructions from the processor 105 and the memory controller 104.
The external I/O port 111 provides an interface between the terminal system 100 and other external devices or physical input modules on the system surface. A surface physical input module may be a key, a keyboard, a dial, and the like, such as a volume key, a power key, a return key, or a camera key. The interface provided by the external I/O port 111 may also include a Universal Serial Bus (USB) interface (which may include USB, Mini-USB, Micro-USB, USB Type-C, etc.), a Thunderbolt interface, a headset interface, a video transmission interface (e.g., a high-definition multimedia interface (HDMI) or a mobile high-definition link (MHL) interface), an external storage interface (e.g., an SD memory card interface), a subscriber identity module (SIM) card interface, and so forth.
The sensor module 118 may have one or more sensors or sensor arrays, including but not limited to: 1. a location sensor, such as a Global Positioning Satellite (GPS) sensor, a beidou satellite positioning sensor or a GLONASS (GLONASS) satellite positioning system sensor, for detecting the current geographical location of the device; 2. the acceleration sensor, the gravity sensor and the gyroscope are used for detecting the motion state of the equipment and assisting in positioning; 3. a light sensor for detecting external ambient light; 4. the distance sensor is used for detecting the distance between an external object and the system; 5. the pressure sensor is used for detecting the pressure condition of system contact; 6. and the temperature and humidity sensor is used for detecting the ambient temperature and humidity. The sensor module 118 may also add any other kind and number of sensors or sensor arrays as the application requires.
In some embodiments of the present invention, the automatic text matching method of the present invention may be performed by the processor 105 by invoking the various components of the terminal system 100 through instructions. The program required by the processor 105 to execute the automatic text matching method is stored in the memory 103.
The above introduces the terminal system to which the automatic text matching method applies; the method itself is described next. Referring to fig. 2, a flowchart of the automatic text matching method according to an embodiment of the present invention is shown. As shown in fig. 2, the method includes the following steps:
S201: determining a target image;
S202: identifying the target image, and determining the category to which a target object in the target image belongs;
S203: determining the target matching text corresponding to that category based on the correspondence between categories and matching texts;
S204: performing preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.
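Before each step is walked through in detail, the overall flow can be summarized in code. The following is a minimal, self-contained sketch of the S201 to S204 pipeline; the toy classifier, the caption table, and all names in it are illustrative assumptions rather than the patent's implementation:

```python
# Toy end-to-end sketch of S201-S204 (illustrative assumptions throughout).

CAPTION_DB = {  # category -> candidate matching texts (the S203 correspondence)
    "ice cream": ["Everyone suffers from the heat, but I love the long summer days."],
    "steak": ["A good meal deserves a few good words."],
}

def classify_target_object(image_path: str) -> str:
    # S202 stand-in: a real system would run a trained recognizer on the image.
    return "ice cream" if "ice" in image_path else "steak"

def auto_match_text(image_path: str) -> str:
    category = classify_target_object(image_path)   # S202 (S201 chose the image)
    candidates = CAPTION_DB.get(category, [])       # S203: look up matching texts
    return candidates[0] if candidates else ""      # S204 then overlays or stores it

print(auto_match_text("photos/ice_cream_01.jpg"))
```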
Specifically, the automatic text matching method in this embodiment may be applied to the terminal system 100, where the terminal system 100 may be a mobile terminal device such as a mobile phone, tablet computer or notebook computer, a desktop computer, or of course some other electronic device; the application is not limited in this respect.
First, in step S201 the target image that requires automatic text matching is determined.
The target image may be determined in, but is not limited to, the following three ways.
First way: when the image acquisition device is started to shoot an image, the currently shot image is determined as the target image.
Specifically, in this embodiment the electronic device is provided with an image acquisition device, such as a camera. When the user starts the photographing function, that is, starts the camera to take a picture, the currently shot image is determined as the target image. For example, when the user wants to share the surroundings of the current location through a social platform, the user taps the share button in the sharing interface of the social platform, selects the "tap to take a picture" option, and the camera starts; the picture taken at that moment is determined as the target image, and automatic text matching is then performed on it according to the method introduced below. Or, for example, after the user opens the camera application and takes a picture, the picture just taken is determined as the target image and automatic text matching is performed on it. That is, a photo can be matched with text right after it is taken.
Second way: when the image acquisition device is started to shoot an image, the preview image corresponding to the image acquisition device is determined as the target image.
Specifically, in this embodiment the electronic device is provided with an image acquisition device, such as a camera. When the user starts the photographing function, that is, starts the camera, the preview image collected by the camera is displayed on the display screen of the electronic device and is determined as the target image, so that text can be matched in real time to the preview images the camera collects during shooting. Further, to reduce the amount of data to be processed, a target image may be determined only every certain number of frames (for example, one group of pictures, GOP) while the image acquisition device is shooting. For example, a target image may be determined every 10 frames: the 1st preview frame is taken as a target image and automatic text matching is performed on it, frames 2 to 10 do not trigger the method of this embodiment, the 11th preview frame is again taken as a target image and triggers automatic text matching, and so on. Alternatively, when the image acquisition device stays at one position for longer than a preset duration (for example, 3 seconds or 5 seconds), the preview image it collects is determined as the target image and automatic text matching is triggered; both trigger strategies are sketched below. Likewise, this way of determining the target image may be applied in a sharing scene or a photographing scene, as described in detail above and not repeated here.
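A minimal sketch of the two preview-image trigger strategies just described, assuming a per-frame callback; the frame interval of 10 and the dwell threshold of 3 seconds are the illustrative values from the text, and all names are hypothetical:

```python
import time

FRAME_INTERVAL = 10   # trigger on preview frames 1, 11, 21, ... (illustrative)
DWELL_SECONDS = 3.0   # or after the camera stays put this long (illustrative)

class PreviewTrigger:
    """Decides whether the current preview frame becomes a target image."""

    def __init__(self) -> None:
        self.frame_count = 0
        self.dwell_start = time.monotonic()

    def on_new_frame(self, camera_moved: bool) -> bool:
        self.frame_count += 1
        if camera_moved:                     # movement resets the dwell timer
            self.dwell_start = time.monotonic()
        if self.frame_count % FRAME_INTERVAL == 1:
            return True                      # frame-interval trigger
        return time.monotonic() - self.dwell_start >= DWELL_SECONDS  # dwell trigger
```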
Third way: an image selected by the user is determined as the target image.
Specifically, in this embodiment an image selected by the user may be determined as the target image. For example, the user browses self-shot images in the electronic album, or browses network images, finds an image he or she really likes, and selects it; the display screen of the electronic device then shows the corresponding function options, including an option for automatically matching text to the image, and when the user taps that option the image is determined as the target image. Of course, the electronic device may also predefine a preset trigger operation for selecting an image as the target image, such as long-pressing or double-tapping the image; when the user performs the preset trigger operation on an image, that image is taken as the target image and automatic text matching is triggered on it.
Further, after the target image is determined, in step S202 object recognition is performed on the target image and the category to which the target object in it belongs is determined.
Specifically, all pixels of the target image may be extracted directly for object recognition. To improve recognition efficiency, however, a target region in which the target object is likely to be located may first be determined, and then only the image of that region recognized. The target region can be determined in, but is not limited to, the following three ways.
the first mode is as follows: and under the condition that the target image is a preview image of the image acquisition device, determining that the target area is a focusing area of the image acquisition device.
Specifically, in this embodiment, following the foregoing example, when the target image is a preview image captured by the image capture device, the image capture device has a focusing area, and the user will usually place the shooting focus on the object of interest, so that the target object is usually located in the focusing area, and the focusing area can be used as the target area, and the image of the focusing area is extracted for object recognition.
The second mode is as follows: and under the condition that the target image comprises depth information, determining a target area as a foreground area in the target image.
Specifically, in this embodiment, the mobile terminal device includes two image capturing devices, for example, two cameras, and images captured by the two cameras are fused to form a target image, where the target image includes depth information. In the target image, the background region is usually blurred to clearly highlight the object of the foreground region, so that the target object is usually located in the foreground region, and the foreground region can be used as the target region to extract the image of the foreground region for object recognition.
The third mode is as follows: and determining the area selected by the user in the target image as a target area.
Specifically, in this embodiment, if the user selects the target image, the target area may be manually set, for example: one area is circled in the target image to be used as a target area. In addition, an area frame can be set, a user can drag the frame to circle and select the target object, and the area corresponding to the area frame is used as the target area. Of course, in a specific implementation process, a default region may also be set as the target region, for example, a central region of each image, and in a specific implementation process, a manner of setting the target region may be set according to actual needs, which is not limited in this application.
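A minimal sketch of the region-selection dispatch, under the assumption that each mode, when applicable, supplies a bounding box as (left, top, right, bottom); the central-region fallback follows the default mentioned above, and the quarter margins are an assumption:

```python
def determine_target_region(img_w: int, img_h: int,
                            focus_box=None, foreground_box=None, user_box=None):
    """Pick the target region; boxes are (left, top, right, bottom) tuples."""
    if focus_box is not None:        # mode 1: preview image -> focusing area
        return focus_box
    if foreground_box is not None:   # mode 2: depth-of-field -> foreground area
        return foreground_box
    if user_box is not None:         # mode 3: area circled/framed by the user
        return user_box
    # Default: central region of the image (margins here are illustrative).
    return (img_w // 4, img_h // 4, 3 * img_w // 4, 3 * img_h // 4)
```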
Further, after the target region is determined by any of the above ways, its image is extracted for object recognition. The recognition may be completed by a trained deep-learning neural network or by an adaptive boosting (AdaBoost) algorithm; of course, other object recognition methods may also be used, and the application is not limited in this respect. For example, image recognition technology may be used to determine the category to which the target object belongs, that is, to classify the object, such as determining that the target object in the target image is a steak or an ice cream.
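As one concrete reading of the AdaBoost option, the sketch below trains scikit-learn's AdaBoostClassifier on stand-in feature vectors; the random features, the two-class label set, and the feature dimension are all illustrative assumptions, and a production system would more likely extract real features from the target-region image:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_train = rng.random((100, 64))      # stand-in features of target-region images
y_train = rng.integers(0, 2, 100)    # 0 = "steak", 1 = "ice cream" (toy labels)

clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

region_features = rng.random((1, 64))          # features of a new target region
category = ["steak", "ice cream"][int(clf.predict(region_features)[0])]
print(category)
```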
After the category to which the target object in the target image belongs is determined in step S202, the target matching text corresponding to that category is determined in step S203 based on the correspondence between categories and matching texts. In a specific implementation this can be achieved as follows:
determining the associated content corresponding to the category;
matching the associated content against the preset core semantics in a preset matching-text database, obtaining the matching text corresponding to the successfully matched preset core semantics, and determining it as the target matching text, where the preset matching-text database stores the correspondence between preset core semantics and matching texts.
Before the target matching text corresponding to the category is determined on the basis of the correspondence between categories and matching texts, word segmentation is performed on text fragments of preset literary works to obtain word segments; based on the word segments, the core semantics corresponding to each text fragment are determined, the text fragment is taken as a matching text, and the correspondence between the core semantics and the text fragment is added to the preset matching-text database as a correspondence between preset core semantics and a matching text.
Specifically, in this embodiment the preset matching-text database may be established in advance. Entries may be added manually; for example, while reading a literary work or a web page, the user selects a favorite text fragment A1 and labels its core semantics A2, where the core semantics may be entered by the user or obtained automatically through a semantic analysis algorithm. Of course, the user may also feed a favorite literary work into the system, and the system automatically performs word segmentation on its sentences to obtain word segments. For example, the line "everyone suffers from the heat" can be decomposed into the segments "everyone", "suffers", and "the heat". The word segmentation algorithm may be one based on character-string matching, a full-segmentation method, a character-based word-construction algorithm, and so on. After word segmentation produces the segments, the core semantics of the fragment can be determined from them, for example by machine training with a semantic analysis algorithm; topic-model algorithms such as probabilistic latent semantic analysis (PLSA), non-negative matrix factorization (NMF), and latent Dirichlet allocation (LDA) may be used.
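The sketch below illustrates the segmentation step with the third-party jieba library and reduces the core-semantics step to a keyword table for brevity (the text names topic models such as PLSA, NMF, or LDA for the real analysis). The assumed original-language form of the sample line, the exact segmentation, and the keyword mapping are illustrative assumptions:

```python
import jieba  # third-party Chinese word-segmentation library (one possible choice)

fragment = "人皆苦炎热"            # assumed original of "everyone suffers from the heat"
tokens = jieba.lcut(fragment)      # e.g. ["人", "皆", "苦", "炎热"]; exact split may vary

# Toy stand-in for semantic analysis: map segment keywords to core semantics.
KEYWORD_TO_SEMANTIC = {"炎热": "summer", "凉": "cool"}
core_semantics = {KEYWORD_TO_SEMANTIC[t] for t in tokens if t in KEYWORD_TO_SEMANTIC}

# The (core semantics -> fragment) pair would then be added to the database.
print(tokens, core_semantics)
```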
Thus, after the text fragments and their corresponding core semantics are determined, each correspondence between core semantics and text fragment can be added to the preset matching-text database as a correspondence between preset core semantics and a matching text. The preset matching texts recorded in the database are therefore passages the user loves, so a personalized matching-text library can be provided for each user, meeting the needs of different users. Moreover, the user can update the preset matching-text database in real time and add interesting entries at any time, in the ways described in detail above, which are not repeated here.
Further, in a specific implementation, different preset matching-text databases can be set up by class. For example, different databases may be established for different literary works, such as one corresponding to Dream of the Red Chamber and one corresponding to Romance of the Three Kingdoms. Of course, different databases may also be established for different authors: the database corresponding to Eileen Chang holds text fragments written by Eileen Chang, and the database corresponding to Zhu Ziqing holds text fragments written by that author. When automatic text matching is performed for a target image, a list of the preset matching-text databases may be displayed for the user to select one or more of them. In a specific implementation, the way of establishing the preset matching-text databases may be set according to actual needs, and the application is not limited here.
Furthermore, after the preset matching-text database is established, the associated content corresponding to different categories can be preset. For example, the associated content of categories such as waterfalls, lakes and mountains can be set to "scenery", and the associated content of categories such as fans and air conditioners to "summer" and "cool". The associated content corresponding to each category may be determined through machine learning or set manually; the application is not limited. After the category to which the target object in the target image belongs is determined, the associated content corresponding to that category is determined; for example, when an ice cream is shot, the associated content of the ice-cream category includes "summer", "cold drinks" and "coolness". The associated content can then be matched against the preset core semantics of the preset matching texts in the database, and the preset matching texts whose core semantics match successfully are taken as target matching texts. For example, when an ice cream is shot, its category is determined to be ice cream, whose associated content includes "summer" and "cool". The preset matching text "In early summer the air is still clear and mild, and the fragrant grass has not yet faded" has the preset core semantics "summer"; "Everyone suffers from the heat, but I love the long summer days" likewise has the core semantics "summer"; and "A quiet heart brings coolness of its own" has the core semantics "cool". The target matching texts that can be offered to the user are therefore these three preset matching texts, as sketched below.
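A minimal sketch of this matching step, using the ice-cream example from the text; the table contents mirror the example, and the verse renderings and all names are assumptions:

```python
# Category -> associated content, and preset core semantics -> matching texts.
CATEGORY_TO_ASSOCIATED = {"ice cream": ["summer", "cool"]}
SEMANTIC_TO_TEXTS = {
    "summer": [
        "In early summer the air is still clear and mild, and the fragrant grass has not yet faded",
        "Everyone suffers from the heat, but I love the long summer days",
    ],
    "cool": ["A quiet heart brings coolness of its own"],
}

def candidate_matching_texts(category: str) -> list[str]:
    """Collect the matching texts whose preset core semantics match the category."""
    texts: list[str] = []
    for semantic in CATEGORY_TO_ASSOCIATED.get(category, []):
        texts.extend(SEMANTIC_TO_TEXTS.get(semantic, []))
    return texts

print(candidate_matching_texts("ice cream"))  # the three candidates of the example
```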
Further, in this embodiment, if the associated content successfully matches a plurality of preset core semantics, the plurality of matching texts corresponding to those semantics are displayed, and the first matching text the user selects from them is determined as the target matching text.
Specifically, in this embodiment, following the example above, when an ice cream is shot the target object of the target image is recognized as the ice-cream category, and the matching texts determined for it are the three passages "In early summer the air is still clear and mild, and the fragrant grass has not yet faded", "Everyone suffers from the heat, but I love the long summer days", and "A quiet heart brings coolness of its own". The three matching texts are displayed as a list in the display interface of the electronic device so that the user can select the final target matching text, and the one the user selects is taken as the target matching text.
Further, the method in this embodiment may also record how many times the user has selected each matching text in the preset matching-text database as the target matching text. Thus, when the user selects the first matching text as the target matching text this time, its selection count is updated, so that the next time a target matching text is matched for the category, the first matching text is displayed among the candidates if its selection count is greater than the preset count.
Specifically, in this embodiment, a matching text the user has selected as the target matching text many times is added to the user's personalized collection. Each time automatic text matching is performed, the selection count of the chosen matching text is updated; if the updated count exceeds a preset count (for example, 5 or 10), the matching text is added to the personalized collection corresponding to its category. The next time the user matches text for that category, the matching texts in its personalized collection can be displayed among the candidates, arranged by selection count, either separately from the other candidates or after them. The matching texts in the personalized collections of the various categories support user add and delete operations: the user can add matching texts to a personalized collection as needed and can delete them, and on deletion the selection count of the deleted matching text can be reset to 0, so that matching texts the user dislikes are not pushed again. In this way, subsequent recommendations can take the user's personal preferences into account and give personalized suggestions, improving the intelligence of the automatic text-matching function and how well it fits the user; the bookkeeping is sketched below.
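A minimal sketch of the selection-count bookkeeping, assuming the threshold of 5 picks mentioned as an example above; the data structures and names are illustrative:

```python
from collections import defaultdict

PROMOTE_AFTER = 5  # one of the example thresholds from the text

selection_counts: dict[str, int] = defaultdict(int)        # matching text -> picks
personal_collection: dict[str, list] = defaultdict(list)   # category -> promoted texts

def record_selection(category: str, text: str) -> None:
    """Called when the user picks `text` as the target matching text."""
    selection_counts[text] += 1
    if (selection_counts[text] > PROMOTE_AFTER
            and text not in personal_collection[category]):
        personal_collection[category].append(text)

def delete_from_collection(category: str, text: str) -> None:
    """User removes a text from the collection; its count resets to 0."""
    personal_collection[category].remove(text)
    selection_counts[text] = 0
```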
Further, after the target matching text is determined for the target image, step S204 may perform the preset processing on the target image in the following two ways.
First way: the target matching text is displayed superimposed on a preset region of the target image.
Specifically, in this embodiment the target matching text can be directly overlaid on the target image in an editable state, so that the user can edit and adjust it: for example, setting the text style, size and color, adjusting the layout, dragging the position, and adding or deleting text are supported. After the user confirms the edited target matching text, it can be composited with the target image into a single image and saved, after which the matching text in the composited image can no longer be edited. Further, the preset region in which the target matching text is superimposed may be a region of the target image other than the one where the target object is located, to avoid blocking the target object, and the display color may be chosen to stand out from the background; in a specific implementation the preset region and display format of the target text may be set according to actual needs. In this way, the user sees the captioned target image when browsing the album, or can directly share the captioned target image to a social platform. A compositing sketch follows.
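A minimal compositing sketch using Pillow as one possible imaging library; the bottom-left placement, white fill, and default font are assumptions standing in for the patent's "preset region" and display format:

```python
from PIL import Image, ImageDraw

def overlay_matching_text(image_path: str, text: str, out_path: str) -> None:
    """Composite the matching text onto the image and save the result."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Illustrative preset region: bottom-left corner, away from a centered subject;
    # white text so it stands out against a typically darker background.
    draw.text((16, img.height - 40), text, fill=(255, 255, 255))
    img.save(out_path)  # once composited, the text is no longer editable
```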
Second way: the correspondence between the target image and the target matching text is stored, so that when the user shares the target image to a preset social platform, the target matching text is automatically filled into the text input box of the sharing interface.
Specifically, in this embodiment a correspondence between the target matching text and the target image may be established and saved together with the target image; the target matching text corresponding to the target image may be stored as text. Then, when the user shares the target image to a social platform, the electronic device can automatically obtain the target matching text corresponding to the target image and fill it into the text input box of the sharing interface, achieving automatic captioning. The user can edit the automatically filled target matching text according to actual needs, for example adding or pruning content. In this way, the matched text and the image support one-tap sharing, which can greatly reduce the user's operating cost in social sharing, lower the difficulty of sharing, and effectively improve the user experience. A storage sketch follows.
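A minimal sketch of this second mode, storing the image-to-text correspondence in a JSON sidecar file; the file name and storage format are assumptions, since the text only specifies that the correspondence is saved:

```python
import json
import os

MAPPING_FILE = "matching_text_map.json"  # assumed storage location

def save_matching_text(image_path: str, text: str) -> None:
    """Persist the target image -> target matching text correspondence."""
    mapping = {}
    if os.path.exists(MAPPING_FILE):
        with open(MAPPING_FILE, encoding="utf-8") as f:
            mapping = json.load(f)
    mapping[image_path] = text
    with open(MAPPING_FILE, "w", encoding="utf-8") as f:
        json.dump(mapping, f, ensure_ascii=False)

def text_for_sharing(image_path: str) -> str:
    """Looked up by the sharing interface to pre-fill its text input box."""
    if not os.path.exists(MAPPING_FILE):
        return ""
    with open(MAPPING_FILE, encoding="utf-8") as f:
        return json.load(f).get(image_path, "")
```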
Thus, the automatic text matching method in this embodiment can automatically determine a text passage that suits the target image and reads well, so that the target matching text corresponding to the target image is displayed in time when the user shares or browses it. This enriches the information recorded with the image, improves the convenience of shooting and sharing and the quality of the matched text, lowers the difficulty of sharing images, and improves the user experience.
Referring to fig. 3, a second embodiment of the present invention provides an automatic text matching device, including:
a first determining unit 301, configured to determine a target image;
a second determining unit 302, configured to identify the target image and determine the category to which a target object in the target image belongs;
a third determining unit 303, configured to determine the target matching text corresponding to that category based on the correspondence between categories and matching texts;
a processing unit 304, configured to perform preset processing on the target image based on the target matching text, so that the target image is given a corresponding description in a preset scene.
As an optional embodiment, the first determining unit is specifically configured to:
when an image acquisition device is started to shoot an image, determine the currently shot image as the target image; or
when an image acquisition device is started to shoot an image, determine the preview image corresponding to the image acquisition device as the target image; or
determine an image selected by the user as the target image.
As an optional embodiment, the second determining unit is specifically configured to:
determine a target region in the target image;
and extract the image of the target region and perform recognition on it.
Optionally, the second determining unit is further specifically configured to:
when the target image is a preview image of an image acquisition device, determine the focusing area of the image acquisition device as the target region; or
when the target image contains depth-of-field information, determine the foreground area of the target image as the target region; or
determine an area selected by the user in the target image as the target region.
As an optional embodiment, the third determining unit is specifically configured to:
determining the associated content corresponding to the belonged category;
matching the associated content with preset core semantics in a preset configuration document database, acquiring a configuration document corresponding to the successfully matched preset core semantics, and determining the configuration document corresponding to the successfully matched preset core semantics as a target configuration document, wherein the preset configuration document database comprises a corresponding relation between the preset core semantics and the configuration document.
As an optional embodiment, the apparatus further includes a database establishing unit, specifically configured to:
before determining the target collocation text corresponding to the category based on the correspondence between categories and collocation texts, performing word segmentation on the text fragments of preset literary works to obtain segmented fragments;
determining, based on the segmented fragments, the core semantics corresponding to each text fragment, taking the text fragment as a collocation text, and adding the correspondence between the core semantics and the text fragment to the preset collocation text database as a correspondence between preset core semantics and a collocation text.
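One possible construction of that database uses the jieba library for Chinese word segmentation, with its TF-IDF keyword extractor standing in for "determining core semantics". The patent names neither a segmenter nor a semantics model, so this tooling is an assumption.

    import jieba.analyse

    def build_collocation_db(text_fragments, top_k=3):
        db = []
        for fragment in text_fragments:
            # Treat the fragment's top TF-IDF keywords as its preset core semantics.
            core_semantics = set(jieba.analyse.extract_tags(fragment, topK=top_k))
            # The fragment itself is stored as the collocation text.
            db.append((core_semantics, fragment))
        return db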
As an optional embodiment, the third determining unit is specifically configured to:
if a plurality of preset core semantics are successfully matched with the associated content, displaying a plurality of collocation texts corresponding to the plurality of preset core semantics;
and determining a first collocation text selected by the user from the plurality of collocation texts as the target collocation text.
As an optional embodiment, the apparatus further includes an updating unit, specifically configured to:
and after determining the collocation text selected by the user from the plurality of collocation texts as the target collocation text, updating the number of times the first collocation text has been selected, so that the next time the category is matched to a target collocation text, the first collocation text is displayed among the candidate collocation texts if the number of times exceeds a preset number.
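A sketch of this preference update, assuming the selection counts live in an in-memory dict and that PRESET_TIMES stands for the preset number of times mentioned above; persistence and scoping per category are left out.

    from collections import defaultdict

    PRESET_TIMES = 3                      # assumed threshold
    selection_counts = defaultdict(int)   # collocation text -> times chosen

    def record_selection(chosen_text):
        selection_counts[chosen_text] += 1

    def rank_candidates(matched_texts):
        # Texts chosen more than PRESET_TIMES times surface first among candidates.
        preferred = [t for t in matched_texts if selection_counts[t] > PRESET_TIMES]
        rest = [t for t in matched_texts if selection_counts[t] <= PRESET_TIMES]
        return preferred + rest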
As an optional embodiment, the processing unit is specifically configured to:
and displaying the target collocation text in a preset area of the target image in an overlaid manner.
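One way to realize the overlay is with the Pillow imaging library; the lower-left placement, margin, and white fill below are arbitrary choices, not values from the patent.

    from PIL import Image, ImageDraw

    def overlay_collocation_text(image_path, text, out_path):
        img = Image.open(image_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Preset area assumed here: lower-left corner with a small margin.
        draw.text((10, img.height - 30), text, fill=(255, 255, 255))
        img.save(out_path)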
As an optional embodiment, the processing unit is specifically configured to:
and storing the correspondence between the target image and the target collocation text, so that when the user shares the target image to a preset social platform, the target collocation text is automatically filled into a text input box of the sharing interface.
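A sketch of persisting the image-to-text correspondence so that a sharing interface can prefill its input box. The JSON side file is an assumed storage scheme; a real device might instead use a database or image metadata.

    import json, os

    MAP_PATH = "collocation_map.json"     # assumed location of the stored mapping

    def save_collocation(image_path, text):
        mapping = {}
        if os.path.exists(MAP_PATH):
            with open(MAP_PATH, encoding="utf-8") as f:
                mapping = json.load(f)
        mapping[image_path] = text
        with open(MAP_PATH, "w", encoding="utf-8") as f:
            json.dump(mapping, f, ensure_ascii=False)

    def prefill_share_text(image_path):
        # Called by the (hypothetical) sharing interface to fill its text box.
        if not os.path.exists(MAP_PATH):
            return ""
        with open(MAP_PATH, encoding="utf-8") as f:
            return json.load(f).get(image_path, "")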
Based on the same inventive concept as the automatic text collocation method in the foregoing embodiment, a third embodiment of the present invention further provides a terminal system. Referring to fig. 1, the apparatus of this embodiment includes: a processor 105, a memory 103, and a computer program stored in the memory and executable on the processor, for example a program corresponding to the automatic text collocation method in the first embodiment. The processor implements the steps of the automatic text collocation method in the first embodiment when executing the computer program. Alternatively, the processor implements the functions of the modules/units in the apparatus of the second embodiment when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal system.
For the description of the memory, the processor, and other structures of the terminal system, please refer to the above; it is not repeated here.
Further, the processor 105 of the apparatus has the following functions:
determining a target image;
identifying the target image, and determining the category to which a target object in the target image belongs;
determining a target collocation text corresponding to the category based on a correspondence between categories and collocation texts;
and performing preset processing on the target image based on the target collocation text, so that the target image is correspondingly described in a preset scene.
Further, the processor 105 of the apparatus has the following functions:
when an image acquisition device is started to shoot an image, determining the currently shot image as the target image; or
when the image acquisition device is started to shoot an image, determining a preview image corresponding to the image acquisition device as the target image; or
determining an image selected by the user as the target image.
Further, the processor 105 of the apparatus has the following functions:
determining a target area in the target image;
and extracting the image of the target area and identifying the image of the target area.
Further, the processor 105 of the apparatus has the following functions:
determining a focusing area of an image acquisition device as the target area when the target image is a preview image of the image acquisition device; or
determining a foreground area in the target image as the target area when the target image contains depth-of-field information; or
determining an area selected by the user in the target image as the target area.
Further, the processor 105 of the apparatus has the following functions:
determining associated content corresponding to the category;
matching the associated content with preset core semantics in a preset collocation text database, acquiring the collocation text corresponding to the successfully matched preset core semantics, and determining it as the target collocation text, wherein the preset collocation text database comprises correspondences between preset core semantics and collocation texts.
Further, the processor 105 of the apparatus has the following functions:
before determining the target collocation text corresponding to the category based on the correspondence between categories and collocation texts, performing word segmentation on the text fragments of preset literary works to obtain segmented fragments;
determining, based on the segmented fragments, the core semantics corresponding to each text fragment, taking the text fragment as a collocation text, and adding the correspondence between the core semantics and the text fragment to the preset collocation text database as a correspondence between preset core semantics and a collocation text.
Further, the processor 105 of the apparatus has the following functions:
if a plurality of preset core semantics are successfully matched with the associated content, displaying a plurality of collocation texts corresponding to the plurality of preset core semantics;
and determining a first collocation text selected by the user from the plurality of collocation texts as the target collocation text.
Further, the processor 105 of the apparatus has the following functions:
after determining the collocation text selected by the user from the plurality of collocation texts as the target collocation text, updating the number of times the first collocation text has been selected, so that the next time the category is matched to a target collocation text, the first collocation text is displayed among the candidate collocation texts if the number of times exceeds a preset number.
Further, the processor 105 of the apparatus has the following functions:
displaying the target collocation text in a preset area of the target image in an overlaid manner.
Further, the processor 105 of the apparatus has the following functions:
storing the correspondence between the target image and the target collocation text, so that when the user shares the target image to a preset social platform, the target collocation text is automatically filled into a text input box of the sharing interface.
A fourth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored. If the integrated functional units of the automatic text collocation apparatus of the second embodiment are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the automatic text collocation method of the first embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (11)

1. An automatic text collocation method, comprising:
determining a target image;
identifying the target image, and determining the category to which a target object in the target image belongs;
determining a target collocation text corresponding to the category based on a correspondence between categories and collocation texts, wherein the determining the target collocation text corresponding to the category based on the correspondence between categories and collocation texts comprises:
determining associated content corresponding to the category;
matching the associated content with preset core semantics in a preset collocation text database, acquiring a collocation text corresponding to the successfully matched preset core semantics, and determining the collocation text corresponding to the successfully matched preset core semantics as the target collocation text, wherein the preset collocation text database comprises correspondences between preset core semantics and collocation texts;
wherein before the determining the target collocation text corresponding to the category based on the correspondence between categories and collocation texts, the method further comprises:
performing word segmentation on text fragments of preset literary works to obtain segmented fragments;
determining core semantics corresponding to the text fragments based on the segmented fragments, taking the text fragments as collocation texts, and adding correspondences between the core semantics and the text fragments to the preset collocation text database as correspondences between preset core semantics and collocation texts;
and performing preset processing on the target image based on the target collocation text, so that the target image is correspondingly described in a preset scene.
2. The method of claim 1, wherein the determining the collocation text corresponding to the successfully matched preset core semantics as the target collocation text comprises:
if a plurality of preset core semantics are successfully matched with the associated content, displaying a plurality of collocation texts corresponding to the plurality of preset core semantics;
and determining a first collocation text selected by the user from the plurality of collocation texts as the target collocation text.
3. The method of claim 2, wherein after determining the collocation text selected by the user from the plurality of collocation texts as the target collocation text, the method further comprises:
updating the number of times the first collocation text has been selected, so that the next time the category is matched to a target collocation text, the first collocation text is displayed among the candidate collocation texts if the number of times exceeds a preset number.
4. The method according to claim 1, wherein the performing preset processing on the target image based on the target collocation text comprises:
displaying the target collocation text in a preset area of the target image in an overlaid manner.
5. The method according to claim 1, wherein the performing preset processing on the target image based on the target collocation text comprises:
storing the correspondence between the target image and the target collocation text, so that when the user shares the target image to a preset social platform, the target collocation text is automatically filled into a text input box of the sharing interface.
6. The method of claim 1, wherein the determining a target image comprises:
when an image acquisition device is started to shoot an image, determining the currently shot image as the target image; or
when the image acquisition device is started to shoot an image, determining a preview image of the image acquisition device as the target image; or
determining an image selected by the user as the target image.
7. The method of claim 1, wherein the identifying the target image comprises:
determining a target area in the target image;
and extracting the image of the target area and identifying the image of the target area.
8. The method of claim 7, wherein the determining the target area in the target image comprises:
determining a focusing area of an image acquisition device as the target area when the target image is a preview image of the image acquisition device; or
determining a foreground area in the target image as the target area when the target image contains depth-of-field information; or
determining an area selected by the user in the target image as the target area.
9. An automatic text collocation apparatus, comprising:
a first determining unit, configured to determine a target image;
a second determining unit, configured to identify the target image and determine the category to which a target object in the target image belongs;
a third determining unit, configured to determine, based on a correspondence between categories and collocation texts, a target collocation text corresponding to the category, and specifically configured to:
determine associated content corresponding to the category;
match the associated content with preset core semantics in a preset collocation text database, acquire a collocation text corresponding to the successfully matched preset core semantics, and determine the collocation text corresponding to the successfully matched preset core semantics as the target collocation text, wherein the preset collocation text database comprises correspondences between preset core semantics and collocation texts;
a database establishing unit, configured to, before the target collocation text corresponding to the category is determined based on the correspondence between categories and collocation texts, perform word segmentation on text fragments of preset literary works to obtain segmented fragments;
and determine, based on the segmented fragments, core semantics corresponding to the text fragments, take the text fragments as collocation texts, and add correspondences between the core semantics and the text fragments to the preset collocation text database as correspondences between preset core semantics and collocation texts;
and a processing unit, configured to perform preset processing on the target image based on the target collocation text, so that the target image is correspondingly described in a preset scene.
10. A terminal system, comprising a processor and a memory:
the memory is used for storing a program for executing the method of any one of claims 1 to 8;
the processor is configured to execute programs stored in the memory.
11. A computer storage medium having stored thereon computer software instructions which, when executed by a processor, are operable to carry out the method of any one of claims 1 to 8.
CN201810896921.9A 2018-08-08 2018-08-08 Automatic text collocation method and device and computer storage medium Active CN109167939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810896921.9A CN109167939B (en) 2018-08-08 2018-08-08 Automatic text collocation method and device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810896921.9A CN109167939B (en) 2018-08-08 2018-08-08 Automatic text collocation method and device and computer storage medium

Publications (2)

Publication Number Publication Date
CN109167939A CN109167939A (en) 2019-01-08
CN109167939B true CN109167939B (en) 2021-07-16

Family

ID=64895090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810896921.9A Active CN109167939B (en) 2018-08-08 2018-08-08 Automatic text collocation method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN109167939B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866404B (en) * 2019-04-25 2022-04-29 华为技术有限公司 Video editing method and electronic equipment
CN110489674B (en) * 2019-07-02 2020-11-06 百度在线网络技术(北京)有限公司 Page processing method, device and equipment
CN110297934B (en) * 2019-07-04 2024-03-15 腾讯科技(深圳)有限公司 Image data processing method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248800A (en) * 2012-02-14 2013-08-14 联想(北京)有限公司 Method and device for adding annotation information and digital camera
CN104965921A (en) * 2015-07-10 2015-10-07 陈包容 Information matching method
KR20180060208A (en) * 2016-11-28 2018-06-07 문경록 Contents upload method on service page

Also Published As

Publication number Publication date
CN109167939A (en) 2019-01-08

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant