CN112311952A - Image processing method, system and device - Google Patents

Image processing method, system and device

Info

Publication number
CN112311952A
Authority
CN
China
Prior art keywords
original image
binary number
image
related information
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010863445.8A
Other languages
Chinese (zh)
Inventor
侯俊勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202010863445.8A priority Critical patent/CN112311952A/en
Publication of CN112311952A publication Critical patent/CN112311952A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/44Secrecy systems
    • H04N1/4446Hiding of documents or document information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image processing method, system, and device. The specific implementation scheme is as follows: acquire an original image captured (screenshotted) by a user, and convert the original image into binary form to generate a binary number of the original image; acquire related information of the original image and binary-encode it to generate a binary number of the related information, wherein the related information characterizes various information related to an abnormal problem in the original image; write the binary number of the related information into the binary number of the original image using a trained information writing model to generate a first image, wherein the information writing model hides the related information in the original image; and send the first image to a service platform. With this scheme, key information can be quickly located from the screenshot of a problematic page fed back by a user.

Description

Image processing method, system and device
Technical Field
Embodiments of the present application relate to the field of computer technology, in particular to the field of image processing, and more particularly to an image processing method, system, and device.
Background
At present, mobile phone applications are very rich, and users may encounter problems during use. For example, while shopping a user may find a price mismatch, a coupon mismatch, incorrect promotion information, or a merchandise activity page that opens blank. When a user finds such a problem, the user generally feeds it back to the platform using the screenshot function, but when the platform receives the user's feedback image, much important information cannot be obtained from it, such as the commodity detail page it was taken from.
In the prior art, a fuzzy matching search is performed using the information visible in the feedback image. With this approach, the current page cannot be quickly identified, and much of the important information about the abnormal activity page cannot be retrieved, so the page abnormality cannot be fully resolved.
Disclosure of Invention
The application provides an image processing method, system, device, equipment and storage medium.
According to a first aspect of the present application, there is provided an image processing method comprising: acquiring an original image captured by a user, and converting the original image into binary form to generate a binary number of the original image; acquiring related information of the original image and binary-encoding it to generate a binary number of the related information, wherein the related information characterizes various information related to an abnormal problem in the original image; writing the binary number of the related information into the binary number of the original image using a trained information writing model to generate a first image, wherein the information writing model hides the related information in the original image; and sending the first image to a service platform.
In some embodiments, the information writing model is trained as follows. First, a training sample set is acquired, where each training sample comprises the binary number of an original image, the binary number of related information, and the first image corresponding to that pair; the related information is the full set of user-reported information related to abnormal problems in the original image. Then, using a deep learning algorithm, the information writing model is trained with the binary number of the original image and the binary number of the related information as input data and the corresponding first image as the desired output data.
In some embodiments, generating the first image by writing the binary number of the related information into the binary number of the original image using the trained information writing model comprises: extracting the RGB values of each pixel of the original image from the binary number of the original image; writing the binary number of the related information into the RGB values of each pixel using a writing algorithm, thereby generating the written binary number of the original image, wherein the writing algorithm hides the related information in the pixels of the original image; and generating the first image from the written binary number of the original image.
In some embodiments, writing the binary number of the related information into the RGB values of each pixel of the original image using the writing algorithm, and generating the written binary number of the original image, comprises: zeroing the last bit (the least significant bit) of the binary number corresponding to each RGB value of each pixel of the original image; and writing the bits of the related information into those last bits to generate the written binary number of the original image.
In some embodiments, the related information includes at least one of: the address of the page the user visited, the commodity number the user accessed, and the order number of the user's order.
According to a second aspect of the present application, there is provided an image processing system comprising a client configured to execute the above image processing method.
In some embodiments, the system further comprises a service platform. The service platform is configured to, in response to receiving the first image sent by the client, convert the first image into binary form to generate a binary number of the first image, and to parse the binary number of the first image to obtain the related information.
In some embodiments, parsing the content of the binary number of the first image to obtain the related information includes: extracting the RGB values of each pixel of the first image from the binary number of the first image; applying an extraction algorithm to those RGB values to generate an extracted character string, wherein the extraction algorithm is matched with the writing algorithm; and generating the related information from the extracted character string.
In some embodiments, the service platform is further configured to analyze the related information to determine the abnormal problem of the original image, and to determine a solution based on that abnormal problem.
According to a third aspect of the present application, there is provided an image processing apparatus comprising: a first acquisition unit configured to acquire an original image captured by a user and convert it into binary form to generate a binary number of the original image; a second acquisition unit configured to acquire related information of the original image and binary-encode it to generate a binary number of the related information, wherein the related information characterizes various information related to an abnormal problem in the original image; an information writing unit configured to write the binary number of the related information into the binary number of the original image using a trained information writing model and generate a first image, wherein the information writing model hides the related information in the original image; and an image sending unit configured to send the first image to a service platform.
In some embodiments, the information writing model in the information writing unit is trained as follows. First, a training sample set is acquired, where each training sample comprises the binary number of an original image, the binary number of related information, and the first image corresponding to that pair; the related information is the full set of user-reported information related to abnormal problems in the original image. Then, using a deep learning algorithm, the information writing model is trained with the binary number of the original image and the binary number of the related information as input data and the corresponding first image as the desired output data.
In some embodiments, the information writing unit includes: an extraction module configured to extract the RGB values of each pixel of the original image from the binary number of the original image; a writing module configured to write the binary number of the related information into the RGB values of each pixel using a writing algorithm and generate the written binary number of the original image, wherein the writing algorithm hides the related information in the pixels of the original image; and a generating module configured to generate the first image from the written binary number of the original image.
In some embodiments, the writing module comprises: an initialization submodule configured to zero the last bit of the binary number corresponding to each RGB value of each pixel of the original image; and a writing submodule configured to write the bits of the related information into those last bits and generate the written binary number of the original image.
In some embodiments, the related information in the second acquisition unit includes at least one of: the address of the page the user visited, the commodity number the user accessed, and the order number of the user's order.
According to a fourth aspect of the present application, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
According to a fifth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions, wherein the computer instructions are configured to cause a computer to perform the method as described in any one of the implementations of the first aspect.
According to the technology of the present application: an original image captured by a user is acquired and converted into binary form to generate a binary number of the original image; related information of the original image is acquired and binary-encoded to generate a binary number of the related information; the binary number of the related information is written into the binary number of the original image using a trained information writing model to generate a first image, wherein the information writing model hides the related information in the original image; and the first image is sent to a service platform.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application.
Fig. 1 is a schematic diagram of a first embodiment of an image processing method according to the present application;
FIG. 2 is a scene diagram of an image processing method that can implement an embodiment of the present application;
FIG. 3 is a schematic diagram of a second embodiment of an image processing method according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of an image processing system according to the present application;
FIG. 5 is a schematic block diagram of one embodiment of an image processing apparatus according to the present application;
fig. 6 is a block diagram of an electronic device for implementing an image processing method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows a schematic diagram 100 of a first embodiment of an image processing method according to the present application. The image processing method comprises the following steps:
Step 101, obtaining an original image captured by the user, and performing binary number conversion on the original image to generate a binary number of the original image.
In this embodiment, when a user enters a page of an application, an execution body (e.g., a mobile phone client) may obtain the original image captured by the user (e.g., a mobile phone screenshot) by listening for the system screenshot event notification through an API provided by the system, and then perform binary number conversion on the original image to generate a binary number of the original image.
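As a minimal illustration only (not the patent's implementation), the binary number conversion of a captured image can be sketched in Python; the small pixel list here is a stand-in for real screenshot data obtained from the platform's screenshot API:

```python
def image_to_binary(pixels):
    """Convert a list of (R, G, B) pixel tuples into the image's
    binary-number form: one 8-bit string per colour channel."""
    return [format(channel, "08b") for pixel in pixels for channel in pixel]

# Toy 2-pixel "screenshot" standing in for a real captured image.
pixels = [(255, 0, 128), (16, 32, 64)]
binary = image_to_binary(pixels)  # e.g. 255 -> '11111111'
```

A real client would first decode the screenshot file into pixel data before this step; the sketch only shows the per-channel binary representation the later steps operate on.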
Step 102, acquiring related information of the original image, and performing binary coding on the related information to generate a binary number of the related information.
In this embodiment, the execution body may obtain the related information of the original image locally or from other electronic devices over a wired or wireless connection, and then binary-encode the related information to generate a binary number of the related information. The related information characterizes various information related to the abnormal problem in the original image, for example page information and user information. The wireless connection may include, but is not limited to, 3G/4G/5G, WiFi, Bluetooth, WiMAX, Zigbee, and UWB (Ultra-Wideband) connections, as well as other wireless connection means now known or developed in the future.
In some optional implementations of this embodiment, the related information may include at least one of: the address of the page the user visited, the commodity number the user accessed, and the order number of the user's order. By writing this information into the picture, the service platform can promptly discover the various problems related to it.
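For illustration, the binary coding of such related information can be sketched as a UTF-8 bit string; the field names in the sample string below are hypothetical, as the patent does not fix a wire format:

```python
def encode_related_info(info):
    """Binary-encode the related information (e.g. page address,
    commodity number, order number) as a UTF-8 bit string."""
    return "".join(format(byte, "08b") for byte in info.encode("utf-8"))

# Hypothetical serialization of page address, commodity and order numbers.
bits = encode_related_info("page=/item/123;sku=456;order=789")
```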
Step 103, writing the binary number of the related information into the binary number of the original image using the trained information writing model to generate a first image.
In this embodiment, the execution body may write the binary number of the related information obtained in step 102 into the binary number of the original image obtained in step 101 using the trained information writing model, generating the first image corresponding to the original image. The information writing model hides the related information in the original image without changing its visual appearance, so the first image closely approximates the original image.
In some optional implementations of this embodiment, the information writing model is trained as follows. First, a training sample set is acquired, where each training sample comprises the binary number of an original image, the binary number of related information, and the first image corresponding to that pair; the related information is the full set of user-reported information related to abnormal problems in the original image. Then, using a deep learning algorithm, the information writing model is trained with the binary number of the original image and the binary number of the related information as input data and the corresponding first image as the desired output data. During model training, problem-point analysis is performed on the information fed back by users together with the original images; training takes into account factors such as page real-time behavior, user information, and page data links or request parameters; and the key information needed to solve the problem is determined and combined to form the information writing model. A technician may choose the model structure of the information writing model according to actual requirements, and the embodiments of the disclosure are not limited in this respect. Building on deep learning improves the accuracy of the whole system and broadens its range of application.
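The patent does not specify a concrete architecture for the information writing model. Purely to illustrate the structure of the training samples described above, the sketch below assembles (input, expected output) pairs, using an LSB-style embedding (our assumption, matching the writing algorithm of the second embodiment) to produce each expected first image:

```python
def lsb_embed(channels, info_bits):
    """Zero each channel's least significant bit, then write one
    information bit into it (assumed form of the hiding operation)."""
    out = list(channels)
    for i, bit in enumerate(info_bits[:len(out)]):
        out[i] = (out[i] & 0xFE) | int(bit)
    return out

def build_training_set(images, infos):
    """Pair each (original-image binary, info binary) input with the
    corresponding first image as the expected model output."""
    return [{"input": (channels, bits),
             "expected_output": lsb_embed(channels, bits)}
            for channels, bits in zip(images, infos)]

samples = build_training_set([[255, 254, 7]], ["011"])
```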
Step 104, sending the first image to the service platform.
In this embodiment, the executing entity may send the generated first image to the service platform for use by the service platform.
It should be noted that the above binary number conversion and encoding are well-known technologies that are widely researched and applied at present, and are not described herein again.
With continued reference to fig. 2, the image processing method 200 of the present embodiment is executed on the mobile phone terminal 201. When the mobile phone terminal 201 acquires an original image captured by the user, it converts the original image into binary form to generate the binary number 202 of the original image. The mobile phone terminal 201 also acquires the related information of the original image and binary-encodes it to generate the binary number 203 of the related information. The mobile phone terminal 201 then writes the binary number of the related information into the binary number of the original image using the trained information writing model to generate the first image 204, and finally sends the first image to the service platform 205.
The image processing method provided by this embodiment of the application acquires an original image captured by a user and converts it into binary form to generate a binary number of the original image; acquires related information of the original image and binary-encodes it to generate a binary number of the related information; writes the binary number of the related information into the binary number of the original image using a trained information writing model, which hides the related information in the original image, to generate a first image; and sends the first image to a service platform. In this way, key information can be quickly located from the screenshot of a problematic page fed back by the user.
With further reference to fig. 3, a schematic diagram 300 of a second embodiment of the image processing method is shown. The process of the method comprises the following steps:
Step 301, obtaining an original image captured by the user, and performing binary number conversion on the original image to generate a binary number of the original image.
Step 302, acquiring related information of the original image, and performing binary coding on the related information to generate a binary number of the related information.
Step 303, extracting the RGB values of each pixel of the original image from the binary number of the original image.
In this embodiment, the execution body may extract the RGB values of each pixel of the original image from the binary number of the original image.
Step 304, writing the binary number of the related information into the RGB values of each pixel of the original image using a writing algorithm, and generating the written binary number of the original image.
In this embodiment, the execution body may write the binary number of the related information into the RGB values of each pixel of the original image using a writing algorithm, generating the written binary number of the original image without affecting the visual appearance of the image. The writing algorithm hides the related information in the pixels of the original image.
In some optional implementations of this embodiment, writing the binary number of the related information into the RGB values of each pixel of the original image and generating the written binary number of the original image includes: zeroing the last bit of the binary number corresponding to each RGB value of each pixel of the original image; and writing the bits of the related information into those last bits to generate the written binary number of the original image. Writing the related information of the page into the picture with a Least Significant Bit (LSB) algorithm makes it simple, convenient, and fast to locate the page information and find abnormal problems.
Specifically, image pixels are generally composed of the three primary colors red, green, and blue (RGB). Each channel occupies 8 bits, with a value range of 0x00 to 0xFF, i.e., 256 levels; combined, the three channels can represent 256³ = 16,777,216 colors. The human eye can distinguish only about 10 million different colors, which means that roughly 6,777,216 of these colors are indistinguishable to the human eye. The core of the LSB algorithm is: first, zero the last bit (i.e., the least significant bit) of each channel of the original image, which does not change the visual appearance of the image; then assign the bits of the page's related information to those last bits. This completes the information writing while leaving the image visually unchanged, so an observer cannot see that the page information exists at all.
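The LSB write described above can be sketched in a few lines of Python (a simplified illustration, not the patent's code; the pixel tuples stand in for a decoded screenshot):

```python
def lsb_write(pixels, info_bits):
    """Hide info_bits in the image: zero the least significant bit of
    every RGB channel, then write one information bit per channel."""
    channels = [c for px in pixels for c in px]  # flatten to R,G,B,R,G,B,...
    if len(info_bits) > len(channels):
        raise ValueError("image too small to hold the related information")
    for i, bit in enumerate(info_bits):
        channels[i] = (channels[i] & 0xFE) | int(bit)  # zero LSB, then set it
    # Regroup the flat channel list back into (R, G, B) pixels.
    return [tuple(channels[i:i + 3]) for i in range(0, len(channels), 3)]

# Hiding '101100' changes each channel value by at most 1, which is
# imperceptible to the human eye.
stego = lsb_write([(255, 0, 128), (16, 32, 64)], "101100")
# stego == [(255, 0, 129), (17, 32, 64)]
```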
Step 305, generating a first image according to the binary number of the written original image.
In this embodiment, the executing entity may generate the first image from the binary number of the written original image generated in step 304.
Step 306, sending the first image to the service platform.
In this embodiment, the specific operations of steps 301, 302, and 306 are substantially the same as the operations of steps 101, 102, and 104 in the embodiment shown in fig. 1, and are not described again here.
As can be seen from fig. 3, compared with the embodiment corresponding to fig. 1, the schematic diagram 300 of the image processing method in this embodiment extracts the RGB values of each pixel of the original image from the binary number of the original image, writes the binary number of the related information into those RGB values using a writing algorithm to generate the written binary number of the original image, and generates the first image from that written binary number. Because it operates directly on the RGB values of the image, this approach is simpler, more convenient, and more widely applicable, and offers another way to quickly locate key information in the screenshot of a problematic page fed back by a user.
With further reference to fig. 4, the present application provides an image processing system. As shown in fig. 4, the system comprises a client 401 and a service platform 402, wherein the client is configured to execute the image processing method described above. The service platform is configured to, in response to receiving the first image sent by the client, convert the first image into binary form to generate a binary number of the first image, and to parse the binary number of the first image to obtain the related information.
In this system, parsing the content of the binary number of the first image to obtain the related information includes: extracting the RGB values of each pixel of the first image from the binary number of the first image; applying an extraction algorithm to those RGB values to generate an extracted character string, wherein the extraction algorithm is matched with the writing algorithm; and generating the related information from the extracted character string.
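Assuming the LSB scheme of the method embodiments, the matched extraction on the service platform can be sketched as the inverse operation (an illustrative sketch; how the platform learns the bit length is an assumption, here passed as a parameter):

```python
def lsb_extract(pixels, n_bits):
    """Read back the least significant bit of each RGB channel and
    decode the recovered bit string as UTF-8 related information."""
    channels = [c for px in pixels for c in px]
    bits = "".join(str(c & 1) for c in channels[:n_bits])
    usable = len(bits) - len(bits) % 8  # keep whole bytes only
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

# Pixels whose channel LSBs spell out '01000001' ('A' in UTF-8):
pixels = [(0, 1, 0), (0, 0, 0), (0, 1, 255)]
info = lsb_extract(pixels, 8)  # -> 'A'
```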
In this system, the service platform is also configured to analyze the related information to determine the abnormal problem of the original image, and to determine a solution based on that abnormal problem.
This system quickly locates key information in the screenshot of a problematic page fed back by a user, tracks and resolves the abnormal problem in time, reduces the negative impact on the company, and limits the company's losses.
With further reference to fig. 5, as an implementation of the method shown in fig. 1 to 3, the present application provides an embodiment of an image processing apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which is specifically applicable to various electronic devices.
As shown in fig. 5, the image processing apparatus 500 of the present embodiment includes: a first acquisition unit 501, a second acquisition unit 502, an information writing unit 503 and an image sending unit 504. The first acquisition unit is configured to acquire an original image intercepted by a user, perform binary number conversion on the original image and generate a binary number of the original image; the second acquisition unit is configured to acquire related information of the original image, carry out binary coding on the related information and generate a binary number of the related information, wherein the related information is used for representing various information related to abnormal problems in the original image; the information writing unit is configured to write the binary number of the related information into the binary number of the original image by using the trained information writing model to generate a first image, wherein the information writing model is used for hiding the related information in the original image; and the image sending unit is configured to send the first image to the service platform.
In this embodiment, specific processing of the first obtaining unit 501, the second obtaining unit 502, the information writing unit 503 and the image sending unit 504 of the image processing apparatus 500 and technical effects brought by the specific processing can refer to the related descriptions of step 101 to step 104 in the embodiment corresponding to fig. 1, and are not repeated herein.
In some optional implementations of this embodiment, the information writing model in the information writing unit is obtained by the following training method: acquiring a training sample set, wherein training samples in the training sample set comprise binary numbers of original images, binary numbers of related information and first images corresponding to the binary numbers of the original images and the binary numbers of the related information, and the related information is the various information, fed back by the full set of users, that is related to abnormal problems in the original images; and training to obtain an information writing model by using a deep learning algorithm and taking the binary number of the original image and the binary number of the related information included in the training samples in the training sample set as input data and taking the first image corresponding to the input binary number of the original image and the binary number of the related information as expected output data.
In some optional implementations of this embodiment, the information writing unit includes: the extraction module is configured to extract RGB values of all pixel points of the original image from binary numbers of the original image; the writing module is configured to write the binary number of the related information into the RGB value of each pixel point of the original image by using a writing algorithm, and generate the written binary number of the original image, wherein the writing algorithm is used for hiding the related information in each pixel point of the original image; and the generating module is configured to generate a first image according to the binary number of the written original image.
In some optional implementations of this embodiment, the writing module includes: the initialization submodule is configured to zero the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image; and the writing submodule is configured to write the binary number of the relevant information into the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image, and generate the written binary number of the original image.
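At the level of a single colour channel, the two submodules above amount to a mask-and-set operation. A sketch, with the illustrative name `write_last_bit`:

```python
def write_last_bit(channel_value: int, info_bit: int) -> int:
    """Zero the last 1 bit of an 8-bit RGB channel value (initialization
    submodule), then write one bit of the related information into it
    (writing submodule)."""
    zeroed = channel_value & 0b11111110  # initialization: clear the last bit
    return zeroed | (info_bit & 1)       # writing: set it to the info bit
```

Changing only the least significant bit shifts each channel by at most 1 out of 255, which is why the related information remains visually hidden in the pixels.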
In some optional implementations of this embodiment, the related information in the second obtaining unit includes at least one of: the address of the page accessed by the user, the commodity number accessed by the user, and the order number accessed by the user.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 is a block diagram of an electronic device according to the image processing method of the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 6, the electronic apparatus includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.
The memory 602 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the image processing method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the image processing method provided by the present application.
The memory 602, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the image processing method in the embodiment of the present application (for example, the first acquisition unit 501, the second acquisition unit 502, the information writing unit 503, and the image transmission unit 504 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing, i.e., implements the image processing method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the image processing electronic apparatus, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, which may be connected to the image processing electronics via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image processing method may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image processing electronics, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the original image intercepted by the user is obtained and converted into a binary number of the original image; the related information of the original image is obtained and binary-coded to generate a binary number of the related information; the binary number of the related information is written into the binary number of the original image by using the information writing model obtained through training; and the first image is generated.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved, which is not limited herein.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method of image processing, the method comprising:
acquiring an original image intercepted by a user, and performing binary number conversion on the original image to generate a binary number of the original image;
acquiring related information of the original image, carrying out binary coding on the related information, and generating a binary number of the related information, wherein the related information is used for representing various information related to abnormal problems in the original image;
writing the binary number of the related information into the binary number of the original image by using an information writing model obtained by training to generate a first image, wherein the information writing model is used for hiding the related information in the original image;
and sending the first image to a service platform.
2. The method of claim 1, wherein the information writing model is obtained by training:
acquiring a training sample set, wherein training samples in the training sample set comprise binary numbers of the original images, binary numbers of related information and first images corresponding to the binary numbers of the original images and the binary numbers of the related information, and the related information is the various information, fed back by the full set of users, that is related to abnormal problems in the original images;
and training to obtain an information writing model by using a deep learning algorithm and taking the binary number of the original image and the binary number of the related information included in the training samples in the training sample set as input data and taking the first image corresponding to the input binary number of the original image and the binary number of the related information as expected output data.
3. The method of claim 1, wherein the writing the binary number of the related information into the binary number of the original image by using the trained information writing model to generate a first image comprises:
extracting the RGB value of each pixel point of the original image from the binary number of the original image;
writing the binary number of the related information into the RGB value of each pixel point of the original image by using a writing algorithm, and generating the written binary number of the original image, wherein the writing algorithm is used for hiding the related information in each pixel point of the original image;
and generating a first image according to the written binary number of the original image.
4. The method according to claim 3, wherein said writing, by using a writing algorithm, the binary number of the related information into the RGB values of the pixels of the original image to generate the written binary number of the original image comprises:
zeroing the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image;
and writing the binary number of the related information into the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image to generate the written binary number of the original image.
5. The method of claim 1, wherein the related information comprises: at least one of the address of the page accessed by the user, the commodity number accessed by the user, and the order number accessed by the user.
6. An image processing system, the system comprising: a client, wherein,
the client is adapted to perform the image processing method of any of claims 1 to 5.
7. The system of claim 6, wherein the system further comprises: a service platform;
the service platform is used for responding to a received first image sent by a client, performing binary number conversion on the first image and generating a binary number of the first image; and analyzing the binary number of the first image to obtain the related information.
8. The system of claim 7, wherein the parsing the binary number of the first image to obtain the related information comprises:
extracting the RGB value of each pixel point of the first image from the binary number of the first image;
extracting the RGB value of each pixel point of the first image by using an extraction algorithm to generate an extracted character string, wherein the extraction algorithm is matched with the writing algorithm;
and generating the related information according to the extracted character string.
9. The system of claim 7, wherein the service platform is further configured to analyze the related information to obtain an abnormal problem of the original image; and determining a solution of the abnormal problem according to the abnormal problem.
10. An image processing apparatus, the apparatus comprising:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is configured to acquire an original image intercepted by a user, perform binary number conversion on the original image and generate a binary number of the original image;
the second acquisition unit is configured to acquire relevant information of the original image, carry out binary coding on the relevant information and generate binary numbers of the relevant information, wherein the relevant information is used for representing various information related to abnormal problems in the original image;
an information writing unit configured to write a binary number of the related information into a binary number of the original image by using a trained information writing model to generate a first image, wherein the information writing model is used for hiding the related information in the original image;
an image sending unit configured to send the first image to a service platform.
11. The apparatus of claim 10, wherein the information writing model in the information writing unit is obtained by training as follows:
acquiring a training sample set, wherein training samples in the training sample set comprise binary numbers of the original images, binary numbers of related information and first images corresponding to the binary numbers of the original images and the binary numbers of the related information, and the related information is the various information, fed back by the full set of users, that is related to abnormal problems in the original images;
and training to obtain an information writing model by using a deep learning algorithm and taking the binary number of the original image and the binary number of the related information included in the training samples in the training sample set as input data and taking the first image corresponding to the input binary number of the original image and the binary number of the related information as expected output data.
12. The apparatus of claim 10, wherein the information writing unit comprises:
an extraction module configured to extract RGB values of respective pixel points of the original image from binary numbers of the original image;
a writing module configured to write the binary number of the related information into the RGB value of each pixel of the original image by using a writing algorithm, and generate the written binary number of the original image, wherein the writing algorithm is used to hide the related information in each pixel of the original image;
and the generating module is configured to generate a first image according to the written binary number of the original image.
13. The apparatus of claim 12, wherein the write module comprises:
the initialization submodule is configured to zero the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image;
and the writing submodule is configured to write the binary number of the relevant information into the last 1 bit of the binary number corresponding to the RGB value of each pixel point of the original image, and generate the written binary number of the original image.
14. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
15. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202010863445.8A 2020-08-25 2020-08-25 Image processing method, system and device Pending CN112311952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010863445.8A CN112311952A (en) 2020-08-25 2020-08-25 Image processing method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010863445.8A CN112311952A (en) 2020-08-25 2020-08-25 Image processing method, system and device

Publications (1)

Publication Number Publication Date
CN112311952A true CN112311952A (en) 2021-02-02

Family

ID=74483792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010863445.8A Pending CN112311952A (en) 2020-08-25 2020-08-25 Image processing method, system and device

Country Status (1)

Country Link
CN (1) CN112311952A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114138398A (en) * 2022-02-07 2022-03-04 浙江口碑网络技术有限公司 Information feedback method and device
CN114155555A (en) * 2021-12-02 2022-03-08 北京中科智易科技有限公司 Human behavior artificial intelligence judgment system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463457A (en) * 2017-08-04 2017-12-12 深圳市华傲数据技术有限公司 The collection report method and device of a kind of application program feedback information
CN107835296A (en) * 2017-10-12 2018-03-23 无线生活(杭州)信息科技有限公司 A kind of problem feedback method and device
US10019716B1 (en) * 2013-11-21 2018-07-10 Google Llc Method for feedback submission resolution
CN109684177A (en) * 2018-12-26 2019-04-26 浙江口碑网络技术有限公司 Information feedback method and device
CN109840830A (en) * 2019-01-22 2019-06-04 北京顺丰同城科技有限公司 A kind of information feedback method and terminal based on order
CN110109798A (en) * 2019-03-19 2019-08-09 中国平安人寿保险股份有限公司 Application exception processing method, device, computer equipment and storage medium
CN110333981A (en) * 2019-05-28 2019-10-15 平安普惠企业管理有限公司 A kind of feedback method and device, electronic equipment of APP exception information
CN110688286A (en) * 2018-07-05 2020-01-14 北京三快在线科技有限公司 Application program operation information transmission method and device, storage medium and electronic equipment


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114155555A (en) * 2021-12-02 2022-03-08 北京中科智易科技有限公司 Human behavior artificial intelligence judgment system and method
CN114155555B (en) * 2021-12-02 2022-06-10 北京中科智易科技有限公司 Human behavior artificial intelligence judgment system and method
CN114138398A (en) * 2022-02-07 2022-03-04 浙江口碑网络技术有限公司 Information feedback method and device
CN114138398B (en) * 2022-02-07 2022-05-31 浙江口碑网络技术有限公司 Information feedback method and device

Similar Documents

Publication Publication Date Title
CN111626202B (en) Method and device for identifying video
CN114549935B (en) Information generation method and device
CN111783870A (en) Human body attribute identification method, device, equipment and storage medium
CN112001180A (en) Multi-mode pre-training model acquisition method and device, electronic equipment and storage medium
US20220027575A1 (en) Method of predicting emotional style of dialogue, electronic device, and storage medium
CN111539897A (en) Method and apparatus for generating image conversion model
CN112311952A (en) Image processing method, system and device
CN111680517A (en) Method, apparatus, device and storage medium for training a model
CN112507090A (en) Method, apparatus, device and storage medium for outputting information
CN111709875B (en) Image processing method, device, electronic equipment and storage medium
CN112149741A (en) Training method and device of image recognition model, electronic equipment and storage medium
CN111582477B (en) Training method and device for neural network model
CN111079449B (en) Method and device for acquiring parallel corpus data, electronic equipment and storage medium
CN111967304A (en) Method and device for acquiring article information based on edge calculation and settlement table
CN113923474B (en) Video frame processing method, device, electronic equipment and storage medium
CN112016523B (en) Cross-modal face recognition method, device, equipment and storage medium
CN112508964B (en) Image segmentation method, device, electronic equipment and storage medium
CN112560854A (en) Method, apparatus, device and storage medium for processing image
CN112116548A (en) Method and device for synthesizing face image
CN112508163B (en) Method and device for displaying subgraph in neural network model and storage medium
CN111507944B (en) Determination method and device for skin smoothness and electronic equipment
CN113988295A (en) Model training method, device, equipment and storage medium
CN112560678A (en) Expression recognition method, device, equipment and computer storage medium
CN112308127A (en) Method, apparatus, device and storage medium for processing data
CN111930356B (en) Method and device for determining picture format

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210202