CN112084932A - Data processing method, device and equipment based on image recognition and storage medium


Info

Publication number
CN112084932A
Authority
CN
China
Prior art keywords
image
client
information
recognition
data processing
Prior art date
Legal status
Granted
Application number
CN202010927099.5A
Other languages
Chinese (zh)
Other versions
CN112084932B (en)
Inventor
杨元朴
赖博
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202010927099.5A
Publication of CN112084932A
Application granted
Publication of CN112084932B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/602 - Providing cryptographic facilities or services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Bioethics (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to artificial intelligence technology, and provides a data processing method, device, equipment and storage medium based on image recognition. The method comprises: obtaining position information of a first client and a second client, reading a response time limit for the second client from a preset mapping relation table based on the position information, and monitoring in real time whether the second client uploads a second image within the response time limit; if so, acquiring third position information corresponding to the second image, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain the text information corresponding to each image, and feeding back corresponding prompt information to each client based on a comparison of the first text information with the second text information and a comparison of the first position information with the second position information. The method and the device can improve the accuracy of automatically verifying the authenticity of a third-party service.

Description

Data processing method, device and equipment based on image recognition and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a data processing method, a data processing device, data processing equipment and a storage medium based on image recognition.
Background
When a user purchases a corresponding product, the service provider may entrust a third party to provide services for the user, such as vehicle rescue in the insurance industry or home maintenance in the housekeeping industry. When the third party provides a service for the user, it needs to upload an image of the completed service to the service provider, and the service provider mostly verifies manually, based on the uploaded image, whether the third party has faked the service. Although technical schemes for automatically verifying the authenticity of third-party services exist on the market, most of them are implemented with only a single classification algorithm and suffer from technical problems such as low accuracy and insufficient stability.
Disclosure of Invention
In view of the above, the present invention provides a data processing method, device, apparatus and storage medium based on image recognition, and aims to solve the technical problems of low accuracy and poor stability in automatically verifying the authenticity of a third-party service in the prior art.
In order to achieve the above object, the present invention provides a data processing method based on image recognition, the method comprising:
receiving request information of data processing sent by a first client, and acquiring a first image carried in the request information;
sending a preset instruction to a second client in response to the request information sent by the first client, receiving confirmation information fed back by the second client, and monitoring in real time whether the second client uploads a second image within a preset response time limit;
when it is monitored that a second image is uploaded by the second client within the response time limit, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
and comparing the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
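To make the order of these four steps concrete, the following is a minimal server-side sketch in Python. The helper names (send_instruction, poll_uploaded_image, recognize, compare_and_notify), the client objects and the polling loop are illustrative assumptions that do not appear in the patent; they only spell out how the steps chain together.

```python
import time

RESPONSE_TIME_LIMIT_S = 30 * 60  # assumed limit; in practice read from the mapping relation table

def handle_data_processing_request(request, first_client, second_client, third_client, model):
    first_image = request["image"]                    # step 1: first image carried in the request

    second_client.send_instruction("preset_instruction")   # step 2: preset instruction to second client
    second_client.wait_for_confirmation()

    deadline = time.time() + RESPONSE_TIME_LIMIT_S
    second_image = None
    while time.time() < deadline:                     # step 2: monitor the upload in real time
        second_image = second_client.poll_uploaded_image()
        if second_image is not None:
            break
        time.sleep(1)

    if second_image is None:                          # no upload within the response time limit
        second_client.notify("please upload the service-completion image")
        return

    first_info = model.recognize(first_image)         # step 3: text + position of the target object
    second_info = model.recognize(second_image)

    compare_and_notify(first_info, second_info,       # step 4: compare and feed back prompt information
                       first_client, second_client, third_client)
```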
Preferably, the comparing the first text information, the second text information, the first location information, and the second location information to obtain a comparison result, and based on the comparison result, feeding back corresponding prompt information to the first client, the second client, or a third client set in advance in an associated manner includes:
judging whether the first text information and the second text information are the same;
if the first text information and the second text information are the same, judging whether the first position information and the second position information are the same;
and if the first position information and the second position information are different, feeding back first prompt information to the third client.
Preferably, the comparing the first text information, the second text information, the first location information, and the second location information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client, or a third client set in advance in an associated manner based on the comparison result further includes:
when the first position information is judged to be the same as the second position information, respectively acquiring first shooting time corresponding to a first image and second shooting time corresponding to a second image;
calculating a time difference between the first photographing time and the second photographing time;
and judging whether the difference value between the time difference and the response time limit is greater than a preset threshold value, and feeding back second prompt information to the second client when the difference value between the time difference and the response time limit is greater than the preset threshold value.
Preferably, the determining whether the difference between the time difference and the response time limit is greater than a preset threshold further includes:
and when the difference value between the time difference and the response time limit is judged to be less than or equal to a preset threshold value, adding the user identification corresponding to the second client to a preset data table.
Preferably, after the first image carried in the request information is acquired, the method further includes uploading the first image to a first node of a block chain to execute a second encryption process, where the second encryption process includes:
a first node of the block chain generates a key of an encryption algorithm, and encrypts a first image based on the encryption algorithm using the key to generate an encrypted first encrypted image;
encrypting the key by using a public key of an asymmetric key pair of the first node and a second node associated with the first node in the block chain to generate a corresponding key ciphertext;
and storing the first encrypted image and the key ciphertext as block chain data into a corresponding block of a block chain.
Preferably, after the first image carried in the request information is obtained, the method further includes performing a first encryption process on the first image, where the first encryption process includes:
decomposing the first image into a first approximation sub-band and a first detail sub-band in the time domain using a discrete wavelet transform;
carrying out pseudo-random encryption on the first approximate sub-band, and carrying out quantization processing to obtain a second approximate sub-band;
performing Arnold transform encryption on the first detail sub-band to obtain a second detail sub-band;
merging the second approximation subband data and the second detail subband data to obtain merged data;
and compressing the merged data according to Huffman coding to form a bit stream, and obtaining an encrypted first image based on the bit stream.
Preferably, when it is monitored that the second client uploads the second image within the response time limit, the track information of the second client within the response time limit is acquired, and the track information is fed back to the third client.
In order to achieve the above object, the present invention also provides an image recognition-based data processing apparatus, comprising:
a receiving module: configured to receive request information of data processing sent by a first client, and acquire a first image carried in the request information;
an acquisition module: configured to send a preset instruction to a second client in response to the request information sent by the first client, receive confirmation information fed back by the second client, and monitor in real time whether the second client uploads a second image within a preset response time limit;
a monitoring module: configured to, when it is monitored that the second client uploads a second image within the response time limit, input the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
a feedback module: configured to compare the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feed back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
In order to achieve the above object, the present invention also provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the steps of the image recognition based data processing method as described above.
To achieve the above object, the present invention further provides a computer-readable storage medium including a storage data area storing data created according to use of a blockchain node and a storage program area storing a data processing program based on image recognition, which when executed by a processor implements any of the steps of the data processing method based on image recognition as described above.
According to the data processing method, device, equipment and storage medium based on image recognition, the authenticity of a third-party service is judged by associating key information of the user and the third party, such as the photographing time and the photographing location, and the images are recognized with an optimization algorithm based on deep learning model fusion. Image information can therefore be recognized without additional supervision information, images photographed in complex outdoor environments can be recognized, the influence of dim light and noisy environmental factors is greatly reduced, the photographing requirements are lowered, the recognition accuracy is improved, and fake third-party services can be identified quickly.
Drawings
FIG. 1 is a diagram of an electronic device according to a preferred embodiment of the present invention;
FIG. 2 is a block diagram of a preferred embodiment of the data processing apparatus based on image recognition shown in FIG. 1;
FIG. 3 is a flow chart of a preferred embodiment of the data processing method based on image recognition according to the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an electronic device 1 according to a preferred embodiment of the invention.
The electronic device 1 includes but is not limited to: memory 11, processor 12, display 13, and network interface 14. The electronic device 1 is connected to a network through the network interface 14 to obtain raw data. The network may be a wireless or wired network such as an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth, Wi-Fi, or another communication network.
The memory 11 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the storage 11 may be an internal storage unit of the electronic device 1, such as a hard disk or a memory of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like equipped with the electronic device 1. Of course, the memory 11 may also comprise both an internal memory unit and an external memory device of the electronic device 1. In this embodiment, the memory 11 is generally used for storing an operating system installed in the electronic device 1 and various types of application software, such as program codes of the data processing program 10 based on image recognition. Further, the memory 11 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 12 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 12 is typically used for controlling the overall operation of the electronic device 1, such as performing data interaction or communication related control and processing. In this embodiment, the processor 12 is configured to run the program code stored in the memory 11 or process data, for example, run the program code of the data processing program 10 based on image recognition.
The display 13 may be referred to as a display screen or display unit. In some embodiments, the display 13 may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display 13 is used for displaying information processed in the electronic device 1 and for displaying a visual work interface, e.g. displaying the results of data statistics.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), the network interface 14 typically being used for establishing a communication connection between the electronic device 1 and other electronic devices.
Fig. 1 only shows the electronic device 1 with components 11-14 and the image recognition based data processing program 10, but it is to be understood that not all of the shown components are required to be implemented, and that more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further comprise a user interface, the user interface may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may further comprise a standard wired interface and a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an Organic Light-Emitting Diode (OLED) touch screen, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
The electronic device 1 may further include a Radio Frequency (RF) circuit, a sensor, an audio circuit, and the like, which are not described in detail herein.
In the above embodiment, the processor 12, when executing the data processing program 10 based on image recognition stored in the memory 11, may implement the following steps:
receiving request information of data processing sent by a first client, and acquiring a first image carried in the request information;
sending a preset instruction to a second client in response to the request information sent by the first client, receiving confirmation information fed back by the second client, and monitoring in real time whether the second client uploads a second image within a preset response time limit;
when it is monitored that a second image is uploaded by the second client within the response time limit, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
and comparing the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
The storage device may be the memory 11 of the electronic device 1, or may be another storage device communicatively connected to the electronic device 1.
For detailed description of the above steps, please refer to the following description of fig. 2 regarding a functional block diagram of an embodiment of the data processing apparatus 100 based on image recognition and fig. 3 regarding a flowchart of an embodiment of a data processing method based on image recognition.
Referring to fig. 2, a functional block diagram of the data processing apparatus 100 based on image recognition according to the present invention is shown.
The data processing device 100 based on image recognition according to the present invention can be installed in an electronic device. According to the implemented functions, the data processing device 100 based on image recognition may include a receiving module 110, an obtaining module 120, a monitoring module 130 and a feedback module 140. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the receiving module 110 is configured to receive request information for data processing sent by a first client, and acquire a first image carried in the request information.
In this embodiment, the scheme is described using the non-accident rescue service in the insurance industry. The receiving module receives request information for data processing sent by the first client; the request may be triggered on the client when the user needs a non-accident rescue service. When the user triggers a rescue request on the client, a live image needs to be captured and uploaded, and after the first image uploaded by the first client is acquired, the image is analyzed to obtain information associated with it, such as geographical location information, capturing time and case number. The first client may be an APP installed by the user.
In an embodiment, in order to prevent a user from falsifying information of an image through plug-in software, the first image may be encrypted, so that the information of the image is difficult to falsify, and the risk of image information leakage is reduced, specifically:
decomposing the first image into a first approximation sub-band and a first detail sub-band in the time domain using a discrete wavelet transform;
carrying out pseudo-random encryption on the first approximate sub-band, and carrying out quantization processing to obtain a second approximate sub-band;
performing Arnold transform encryption on the first detail sub-band to obtain a second detail sub-band;
merging the second approximation subband data and the second detail subband data to obtain merged data;
and compressing the merged data according to Huffman coding to form a bit stream, and obtaining an encrypted first image based on the bit stream.
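The steps above can be sketched as follows, assuming PyWavelets for the discrete wavelet transform. The wavelet choice, the pseudo-random stream, the quantization step and the use of a generic lossless codec in place of a hand-rolled Huffman coder are all illustrative assumptions; the patent does not fix these details.

```python
import numpy as np
import pywt
import zlib

def arnold_transform(block, iterations=1):
    """Arnold (cat map) scrambling of a square 2-D array."""
    n = block.shape[0]
    out = block.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def encrypt_first_image(image, seed=12345):
    # decompose into an approximation sub-band (cA) and detail sub-bands (cH, cV, cD)
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), "haar")

    # pseudo-random encryption of the approximation sub-band, followed by quantization
    rng = np.random.default_rng(seed)
    cA2 = np.round(cA + rng.uniform(-128, 128, size=cA.shape)).astype(np.int32)

    # Arnold-transform encryption of the detail sub-bands (square sub-bands assumed here)
    details2 = [arnold_transform(d) for d in (cH, cV, cD)]

    # merge the encrypted sub-band data
    merged = np.concatenate([cA2.ravel()] + [d.astype(np.int32).ravel() for d in details2])

    # compress the merged data into a bit stream (Huffman coding in the patent;
    # a generic lossless codec stands in for it here)
    bitstream = zlib.compress(merged.tobytes())
    return bitstream
```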
In one embodiment, to further ensure the security of the first image, the encrypted first image, and the corresponding location information and shooting time of the first image may be stored in a node of a block chain. Or uploading the shot first image to a first node of the block chain for encryption, specifically:
a first node of the block chain generates a key of an encryption algorithm, and encrypts a first image based on the encryption algorithm using the key to generate an encrypted first encrypted image;
encrypting the key by using a public key of an asymmetric key pair of the first node and a second node associated with the first node in the block chain to generate a corresponding key ciphertext;
and storing the first encrypted image and the key ciphertext as block chain data into a corresponding block of a block chain.
It is to be understood that the first node is not limited to a specific one of the nodes, nor to the processing of one of the nodes. For example, when the node a encrypts the first image, the node a belongs to the first node; and when the node C carries out encryption processing on the first image in the next encryption processing, the node C belongs to the first node.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. It is essentially a decentralized database, a string of data blocks associated by cryptography, each data block containing information of a batch of network transactions for verifying the validity (anti-counterfeiting) of the information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
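A minimal sketch of this second encryption process, using the Python cryptography package, is given below: the first node generates a symmetric key, encrypts the image with it, and wraps that key with the public key shared with the second node. AES-GCM and RSA-OAEP are illustrative choices, and the returned block structure is a placeholder; the patent only requires "an encryption algorithm" and "an asymmetric key pair".

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_on_first_node(first_image_bytes, node_public_key):
    # the first node generates the key of the (symmetric) encryption algorithm
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)

    # encrypt the first image with that key -> first encrypted image
    first_encrypted_image = AESGCM(aes_key).encrypt(nonce, first_image_bytes, None)

    # encrypt the key with the public key of the asymmetric key pair -> key ciphertext
    key_ciphertext = node_public_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # store both as blockchain data in the corresponding block (placeholder structure)
    return {"encrypted_image": first_encrypted_image,
            "nonce": nonce,
            "key_ciphertext": key_ciphertext}
```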
The obtaining module 120 is configured to send a preset instruction to a second client in response to the request information sent by the first client, receive confirmation information fed back by the second client, and monitor whether the second client uploads a second image within a preset response time limit in real time.
In this embodiment, when the image information uploaded by the first client is received, a preset instruction is sent to the second client; the second client may be an APP installed by a rescuer. When response information of the second client based on the preset instruction is received, the location information of the second client is obtained. To prevent the second client, that is, the rescuer, from cooperating with the first client to fake a rescue, a mapping relation table is established in advance, in which the response time limit corresponding to each location distance (that is, the distance between the location associated with the first image and the location of the second client) is stored; if the second client uploads the second image before the response time limit, there is a possibility of faking. The response time limit of the second client is read from the preset mapping relation table according to the position information associated with the first image and the position information of the second client, and whether the second client uploads the second image within the response time limit is monitored in real time. When it is not monitored that the second client uploads the second image within the response time limit, preset prompt information is sent to the second client to remind the rescuer to carry out the rescue as soon as possible. The second image is an image showing the rescuer loading the accident vehicle onto a trailer.
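As an illustration of such a mapping relation table, the sketch below maps bands of distance between the first image's location and the second client's location to a response time limit; the band boundaries and limit values are invented for the example only.

```python
# distance band (km) -> response time limit (minutes); values are illustrative
RESPONSE_LIMIT_TABLE = [
    (5, 30),     # within 5 km:  30-minute response time limit
    (20, 60),    # within 20 km: 60 minutes
    (50, 120),   # within 50 km: 120 minutes
]

def lookup_response_limit(distance_km):
    for max_distance_km, limit_minutes in RESPONSE_LIMIT_TABLE:
        if distance_km <= max_distance_km:
            return limit_minutes
    return 240   # fallback limit for longer distances
```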
A monitoring module 130, configured to, when it is monitored that the second client uploads a second image within the response time limit, input the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, where the first recognition information includes first text information and first position information of a first target object, and the second recognition information includes second text information and second position information of a second target object.
In this embodiment, when it is monitored that the second image is uploaded by the second client within the response time limit, the first image and the second image are respectively input into a pre-trained image recognition model to obtain first recognition information of the first image and second recognition information of the second image, where the first recognition information includes first text information and first position information of the first object, the second recognition information includes second text information and second position information of the second object, the first text information and the second text information may be license plate numbers of a trailer and an accident vehicle, and the second position information is used for comparing with the first position information to further determine whether a rescuer corresponding to the second client reaches the position of the user who initiates the rescue request.
The pre-trained image recognition model is an optimization algorithm of deep learning model fusion, and the specific flow of the image recognition model in recognizing the image is as follows:
a1: extracting image characteristics V1 from the image by using a convolutional neural network;
a2: establishing a visual attention model, and inputting the image features V1 obtained in step A1 into the visual attention model to obtain image features V2 processed by the visual attention model; if a character is detected to exist, inputting the image features V1 of step A1 and the character Wt-1 generated by the pre-established semantic attention model at time t-1 into the visual attention model to obtain image features V2 processed by the visual attention model;
a3: establishing a first long-short term memory network, wherein the first long-short term memory network is used by a visual attention model, and inputting a hidden layer state of the first long-short term memory network at the t-1 moment and an image characteristic V2 processed by the visual attention model into the first long-short term memory network to obtain a character Wt' generated by the visual attention model at the t moment;
a4: inputting characters Wt' generated by the visual attention model at the time t and a predefined label A into a pre-established semantic attention model together to obtain semantic information Et generated by the semantic attention model at the time t;
a5: establishing a second long-short term memory network, wherein the second long-short term memory network is used by the semantic attention model, and the state of a hidden layer of the second long-short term memory network at the time t-1 and semantic information Et generated by the semantic attention model at the time t are input into the second long-short term memory network to obtain a character Wt generated by the semantic attention model at the time t;
a6: judging whether an instruction to stop recognition is detected; if so, splicing and combining all the obtained characters to obtain the text information; if not, using the character Wt obtained in step A5 to update Wt-1 in step A2, returning to step A2, and continuing to execute steps A2-A5 until the instruction to stop recognition is detected.
The image is recognized by the optimization algorithm based on deep learning model fusion, so a single character can be recognized without additional supervision information, the influence of dim light and noisy environmental factors can be reduced when recognizing an image shot in a complex outdoor environment, and the recognition accuracy is improved.
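A condensed PyTorch sketch of the recognition flow A1-A6 follows. The layer sizes, the simple linear attention layers, the stop criterion (a reserved index 0) and the way the predefined label A is fed in are illustrative assumptions; the patent only fixes the roles of the two attention models and the two long-short term memory networks.

```python
import torch
import torch.nn as nn

class DualAttentionRecognizer(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab=1000):
        super().__init__()
        self.hidden = hidden
        self.cnn = nn.Sequential(                              # A1: CNN feature extractor (stand-in)
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.visual_attn = nn.Linear(feat_dim + hidden, feat_dim)     # A2: visual attention model
        self.lstm1 = nn.LSTMCell(feat_dim, hidden)             # A3: first LSTM (visual branch)
        self.semantic_attn = nn.Linear(hidden + hidden, hidden)       # A4: semantic attention model
        self.lstm2 = nn.LSTMCell(hidden, hidden)               # A5: second LSTM (semantic branch)
        self.embed = nn.Embedding(vocab, hidden)
        self.classify = nn.Linear(hidden, vocab)

    def forward(self, image, label_a, max_steps=16):
        batch = image.size(0)
        v1 = self.cnn(image).flatten(1)                        # A1: image features V1
        h1 = c1 = h2 = c2 = v1.new_zeros(batch, self.hidden)
        w_prev = self.embed(label_a)                           # stands in for W(t-1) at the first step
        chars = []
        for _ in range(max_steps):
            v2 = torch.tanh(self.visual_attn(torch.cat([v1, w_prev], dim=1)))        # A2: features V2
            h1, c1 = self.lstm1(v2, (h1, c1))                  # A3: visual-branch LSTM -> character W't
            e_t = torch.tanh(self.semantic_attn(torch.cat([h1, self.embed(label_a)], dim=1)))  # A4: Et
            h2, c2 = self.lstm2(e_t, (h2, c2))                 # A5: semantic-branch LSTM -> character Wt
            w_id = self.classify(h2).argmax(dim=1)
            chars.append(w_id)
            if (w_id == 0).all():                              # A6: stop condition ends recognition
                break
            w_prev = self.embed(w_id)                          # Wt replaces W(t-1) for the next step
        return torch.stack(chars, dim=1)                       # spliced characters -> text information
```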
In an embodiment, the monitoring module is further configured to, when it is monitored that the second client uploads the second image within the response time limit, acquire trajectory information of the second client within the response time limit, and feed back the trajectory information to the third client.
A feedback module 140, configured to compare the first text information, the second text information, the first location information, and the second location information to obtain a comparison result, and based on the comparison result, feed back corresponding prompt information to the first client, the second client, or a third client that is set in advance in an associated manner.
In this embodiment, a first text message, a second text message, a first location message, and a second location message are compared to obtain a comparison result, and a corresponding prompt message is fed back to a first client, a second client, or a third client set in advance in association based on the comparison result, specifically, whether the first text message and the second text message are the same is determined, if so, whether the first location message and the second location message are the same is determined, and if not, a first prompt message is fed back to the third client.
Whether the text information of the first image and the text information of the second image are the same is judged. When the information recognized from the two images (such as the license plate numbers) is different, it indicates that the vehicle photographed by the user initiating the rescue request is not the vehicle rescued by the rescuer, so first preset prompt information (for example, warning information) is sent to the first client corresponding to the user and the second client corresponding to the rescuer, and the warning information and the identifications of the user and the rescuer are stored in a preset database to record the faking behavior. When the text information of the first image and the text information of the second image are the same, it indicates that the vehicle photographed by the user and the vehicle rescued by the rescuer are the same vehicle, and whether the first position information and the second position information are the same is judged to further verify the authenticity of the rescue. If the first position information and the second position information are different, the first preset prompt information is sent to a predetermined third client. In this embodiment, the third client may be an APP installed by the service provider for monitoring the rescue situation of the rescue party.
In an embodiment, the apparatus further includes a determining module, configured to, when it is determined that the first position information is the same as the second position information, respectively obtain a first shooting time corresponding to a first image and a second shooting time corresponding to a second image, calculate a time difference between the first shooting time and the second shooting time, determine whether a difference between the time difference and the response time limit is greater than a preset threshold, and when it is determined that the difference between the time difference and the response time limit is greater than the preset threshold, feed back second prompt information to the second client.
The second image is an image shot when the rescue party loads the accident vehicle onto the trailer, that is, the image shot when the rescue is completed. The time difference between the second image and the first image is calculated to obtain the time taken from the user initiating the rescue request to the completion of the rescue, and whether the difference between this rescue-completion time and the response time limit is greater than a preset threshold is judged. If it is greater than the preset threshold, the rescue took a long time, so related prompt information can be sent to the second client.
Further, the judging module is further configured to add the user identifier corresponding to the second client to a preset data table when the difference between the time difference and the response time limit is judged to be less than or equal to the preset threshold. When the difference between the time difference and the response time limit is less than or equal to the preset threshold, it indicates that the rescue party completed the rescue within the specified time, so the user identifier (for example, the ID of the rescuer) corresponding to the second client may be added to the preset data table, where the preset data table may be a list of users who completed the rescue within the specified time.
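The comparison and feedback logic of the feedback and judging modules can be summarized in the short sketch below. The client objects with a notify() method, the prompt strings and the record_table are illustrative placeholders rather than elements named in the patent.

```python
def compare_and_notify(first_info, second_info, response_limit_s, threshold_s,
                       first_client, second_client, third_client, record_table):
    # different recognized text (e.g. license plate numbers): warn user and rescuer, record the behavior
    if first_info["text"] != second_info["text"]:
        first_client.notify("warning: the photographed vehicles do not match")
        second_client.notify("warning: the photographed vehicles do not match")
        record_table.append((first_client.user_id, second_client.user_id))
        return

    # same text but different positions: first prompt information to the third client
    if first_info["position"] != second_info["position"]:
        third_client.notify("first prompt: rescue positions do not match")
        return

    # same text and position: check how long the rescue actually took
    rescue_duration_s = second_info["shot_time"] - first_info["shot_time"]
    if rescue_duration_s - response_limit_s > threshold_s:
        second_client.notify("second prompt: the rescue took longer than expected")
    else:
        record_table.append(second_client.user_id)   # rescue completed within the allowed time
```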
In addition, the invention also provides a data processing method based on image recognition. Fig. 3 is a schematic method flow diagram of an embodiment of the data processing method based on image recognition according to the present invention. The processor 12 of the electronic device 1, when executing the data processing program 10 based on image recognition stored in the memory 11, implements the following steps of the data processing method based on image recognition:
step S10: receiving request information of data processing sent by a first client, and acquiring a first image carried in the request information.
In this embodiment, an electronic device serves as the server, and the scheme is described using the non-accident rescue service in the insurance industry. The server receives request information for data processing sent by the first client; the request may be triggered on the client when the user needs a non-accident rescue service. When the user triggers a rescue request on the client, a live image needs to be shot and uploaded, and the server obtains the first image uploaded by the first client and analyzes it to obtain information such as the geographic position information, shooting time and case number of the image. The first client may be an APP installed by the user.
In an embodiment, in order to prevent a user from falsifying information of an image through plug-in software, the first image may be encrypted, so that the information of the image is difficult to falsify, and the risk of image information leakage is reduced, specifically:
decomposing the first image into a first approximation sub-band and a first detail sub-band in the time domain using a discrete wavelet transform;
carrying out pseudo-random encryption on the first approximate sub-band, and carrying out quantization processing to obtain a second approximate sub-band;
performing Arnold transform encryption on the first detail sub-band to obtain a second detail sub-band;
merging the second approximation subband data and the second detail subband data to obtain merged data;
and compressing the merged data according to Huffman coding to form a bit stream, and obtaining an encrypted first image based on the bit stream.
In one embodiment, to further ensure the security of the first image, the encrypted first image, and the corresponding location information and shooting time of the first image may be stored in a node of a block chain. Or uploading the shot first image to a first node of the block chain for encryption, specifically:
a first node of the block chain generates a key of an encryption algorithm, and encrypts a first image based on the encryption algorithm using the key to generate an encrypted first encrypted image;
encrypting the key by using a public key of an asymmetric key pair of the first node and a second node associated with the first node in the block chain to generate a corresponding key ciphertext;
and storing the first encrypted image and the key ciphertext as block chain data into a corresponding block of a block chain.
It is to be understood that the first node is not limited to a specific one of the nodes, nor to the processing of one of the nodes. For example, when the node a encrypts the first image, the node a belongs to the first node; and when the node C carries out encryption processing on the first image in the next encryption processing, the node C belongs to the first node.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. It is essentially a decentralized database, a string of data blocks associated by cryptography, each data block containing information of a batch of network transactions for verifying the validity (anti-counterfeiting) of the information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Step S20: sending a preset instruction to a second client in response to the request information sent by the first client, receiving confirmation information fed back by the second client, and monitoring in real time whether the second client uploads a second image within a preset response time limit.
In this embodiment, when receiving the image information uploaded by the first client, the server sends a preset instruction to the second client; the second client may be an APP installed by a rescuer. When receiving the response information of the second client based on the preset instruction, the server obtains the location information of the second client. To prevent the second client, i.e., the rescuer, from cooperating with the first client to fake a rescue, a mapping relation table is pre-established, in which the response time limit corresponding to each location distance (i.e., the distance between the location associated with the first image and the location of the second client) is stored; if the second client uploads the second image before the response time limit, there is a possibility of faking. The response time limit of the second client is read from the preset mapping relation table according to the first position information and the position information of the second client, and whether the second client uploads the second image within the response time limit is monitored in real time. When it is not monitored that the second client uploads the second image within the response time limit, preset prompt information is sent to the second client to remind the rescuer to carry out the rescue as soon as possible. The second image refers to an image showing the rescuer loading the accident vehicle onto a trailer.
Step S30: when it is monitored that the second client uploads a second image within the response time limit, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object.
In this embodiment, when it is monitored that the second image is uploaded by the second client within the response time limit, the first image and the second image are respectively input into a pre-trained image recognition model to obtain first recognition information of the first image and second recognition information of the second image, where the first recognition information includes first text information and first position information of the first object, the second recognition information includes second text information and second position information of the second object, the first text information and the second text information may be license plate numbers of a trailer and an accident vehicle, and the second position information is used for comparing with the first position information to further determine whether a rescuer corresponding to the second client reaches the position of the user who initiates the rescue request.
The pre-trained image recognition model is an optimization algorithm of deep learning model fusion, and the specific flow of the image recognition model in recognizing the image is as follows:
a1: extracting image characteristics V1 from the image by using a convolutional neural network;
a2: establishing a visual attention model, and inputting the image features V1 obtained in step A1 into the visual attention model to obtain image features V2 processed by the visual attention model; if a character is detected to exist, inputting the image features V1 of step A1 and the character Wt-1 generated by the pre-established semantic attention model at time t-1 into the visual attention model to obtain image features V2 processed by the visual attention model;
a3: establishing a first long-short term memory network, wherein the first long-short term memory network is used by a visual attention model, and inputting a hidden layer state of the first long-short term memory network at the t-1 moment and an image characteristic V2 processed by the visual attention model into the first long-short term memory network to obtain a character Wt' generated by the visual attention model at the t moment;
a4: inputting characters Wt' generated by the visual attention model at the time t and a predefined label A into a pre-established semantic attention model together to obtain semantic information Et generated by the semantic attention model at the time t;
a5: establishing a second long-short term memory network, wherein the second long-short term memory network is used by the semantic attention model, and the state of a hidden layer of the second long-short term memory network at the time t-1 and semantic information Et generated by the semantic attention model at the time t are input into the second long-short term memory network to obtain a character Wt generated by the semantic attention model at the time t;
a6: judging whether an instruction to stop recognition is detected; if so, splicing and combining all the obtained characters to obtain the text information; if not, using the character Wt obtained in step A5 to update Wt-1 in step A2, returning to step A2, and continuing to execute steps A2-A5 until the instruction to stop recognition is detected.
The image is recognized by the optimization algorithm based on deep learning model fusion, so a single character can be recognized without additional supervision information, the influence of dim light and noisy environmental factors can be reduced when recognizing an image shot in a complex outdoor environment, and the recognition accuracy is improved.
In one embodiment, when it is monitored that the second client uploads the second image within the response time limit, track information of the second client within the response time limit is obtained, and the track information is fed back to the third client. Whether the second client reaches the position associated with the first image within the response time limit can be further judged through the track information.
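One way to use that trajectory information, sketched below, is to check whether any point of the second client's track within the response time limit came close enough to the location associated with the first image; the haversine distance and the 500 m tolerance are assumptions made for the example.

```python
import math

def reached_first_location(track_points, first_location, tolerance_m=500):
    """track_points and first_location are (latitude, longitude) pairs in degrees."""
    lat1, lon1 = map(math.radians, first_location)
    for lat_deg, lon_deg in track_points:
        lat2, lon2 = math.radians(lat_deg), math.radians(lon_deg)
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
        distance_m = 2 * 6371000 * math.asin(math.sqrt(a))   # haversine great-circle distance
        if distance_m <= tolerance_m:
            return True
    return False
```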
Step S40: and comparing the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
In this embodiment, a first text message, a second text message, a first location message, and a second location message are compared to obtain a comparison result, and a corresponding prompt message is fed back to a first client, a second client, or a third client set in advance in association based on the comparison result, specifically, whether the first text message and the second text message are the same is determined, if so, whether the first location message and the second location message are the same is determined, and if not, a first prompt message is fed back to the third client.
Whether the text information of the first image and the text information of the second image are the same is judged. When the information recognized from the two images (such as the license plate numbers) is different, it indicates that the vehicle photographed by the user initiating the rescue request is not the vehicle rescued by the rescuer, so first preset prompt information (for example, warning information) is sent to the first client corresponding to the user and the second client corresponding to the rescuer, and the warning information and the identifications of the user and the rescuer are stored in a preset database to record the faking behavior. When the text information of the first image and the text information of the second image are the same, it indicates that the vehicle photographed by the user and the vehicle rescued by the rescuer are the same vehicle, and whether the first position information and the second position information are the same is judged to further verify the authenticity of the rescue. If the first position information and the second position information are different, first preset prompt information is sent to a predetermined third client. In this embodiment, the third client may be an APP installed by the service provider and used to monitor the rescue situation of the rescue party.
In one embodiment, the method further includes a determining step of, when it is determined that the first position information and the second position information are the same, respectively obtaining a first shooting time corresponding to a first image and a second shooting time corresponding to a second image, calculating a time difference between the first shooting time and the second shooting time, determining whether a difference between the time difference and the response time limit is greater than a preset threshold, and when it is determined that the difference between the time difference and the response time limit is greater than the preset threshold, sending a second preset prompt message to the second client.
The second image is an image shot when the rescue party loads the accident vehicle onto the trailer, that is, the image shot when the rescue is completed. The time difference between the second image and the first image is calculated to obtain the time taken from the user initiating the rescue request to the completion of the rescue, and whether the difference between this rescue-completion time and the response time limit is greater than a preset threshold is judged. If it is greater than the preset threshold, the rescue took a long time, so related prompt information can be sent to the second client.
Further, the method further comprises adding the user identifier corresponding to the second client to a preset data table when the difference between the time difference and the response time limit is judged to be less than or equal to the preset threshold. When the difference between the time difference and the response time limit is less than or equal to the preset threshold, it indicates that the rescue party completed the rescue within the specified time, so the user identifier (for example, the ID of the rescuer) corresponding to the second client may be added to the preset data table, where the preset data table may be a list of users who completed the rescue within the specified time.
Furthermore, the embodiment of the present invention also provides a computer-readable storage medium, which may be any one or any combination of a hard disk, a multimedia card, an SD card, a flash memory card, an SMC, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, and the like. The computer readable storage medium comprises a storage data area and a storage program area, the storage data area stores data created according to the use of the block chain node, the storage program area stores the data processing program 10 based on image recognition, and the data processing program 10 based on image recognition realizes the following operations when being executed by a processor:
receiving request information of data processing sent by a first client, and acquiring a first image carried in the request information;
sending a preset instruction to a second client in response to the request information sent by the first client, receiving confirmation information fed back by the second client, and monitoring in real time whether the second client uploads a second image within a preset response time limit (an illustrative sketch of this monitoring step follows this list of operations);
when it is monitored that a second image is uploaded by the second client within the response time limit, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
and comparing the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
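One possible way to monitor in real time whether the second client uploads the second image within the response time limit is a simple polling loop, sketched below. The polling interval and the poll_upload callback are assumptions; the application does not prescribe a particular monitoring mechanism.

```python
import time
from typing import Callable, Optional

def wait_for_second_image(poll_upload: Callable[[], Optional[bytes]],
                          response_limit_s: float,
                          poll_interval_s: float = 5.0) -> Optional[bytes]:
    """Poll for the second client's upload until the response time limit expires."""
    deadline = time.monotonic() + response_limit_s
    while time.monotonic() < deadline:
        image = poll_upload()          # returns the uploaded image bytes or None
        if image is not None:
            return image               # proceed to the recognition step
        time.sleep(poll_interval_s)
    return None                        # no upload within the response time limit
```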
In another embodiment, in order to further ensure the privacy and security of all the data involved, all of the data may be stored in a node of a blockchain. For example, the first image, the second image, the position information, the shooting time, and the like can all be stored in the blockchain node.
It should be noted that the blockchain referred to in the present invention is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks associated with one another by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
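As a toy illustration of hash-linked storage, the evidence produced by the method (images, positions, shooting times) could be appended to a chain of blocks as sketched below. This is only a sketch of the linking idea; it does not implement consensus, peer-to-peer transmission, or the platform layers mentioned above.

```python
import hashlib
import json
import time
from typing import Any, Dict

def make_block(prev_hash: str, payload: Dict[str, Any]) -> Dict[str, Any]:
    """Build a block whose hash covers the payload and the previous block's hash."""
    body = {"prev_hash": prev_hash, "timestamp": time.time(), "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# Example: chaining a record of the first image onto a toy ledger.
chain = [make_block("0" * 64, {"genesis": True})]
chain.append(make_block(chain[-1]["hash"],
                        {"first_image_sha256": hashlib.sha256(b"image bytes").hexdigest(),
                         "position": "first position information",
                         "shot_at": "first shooting time"}))
```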
The specific implementation of the computer readable storage medium of the present invention is substantially the same as the specific implementation of the data processing method based on image recognition, and will not be described herein again.
It should be noted that the above numbering of the embodiments of the present invention is merely for description and does not represent the relative merits of the embodiments. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element preceded by the phrase "comprising a(n) ..." does not exclude the presence of additional identical elements in the process, apparatus, article, or method that includes that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention essentially or contributing to the prior art can be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) as described above and includes several instructions for enabling a terminal device (such as a mobile phone, a computer, an electronic device, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A data processing method based on image recognition is applied to electronic equipment, and is characterized in that the method comprises the following steps:
receiving request information of data processing sent by a first client, and acquiring a first image carried in the request information;
sending a preset instruction to a second client in response to the request information sent by the first client, receiving confirmation information fed back by the second client, and monitoring in real time whether the second client uploads a second image within a preset response time limit;
when it is monitored that a second image is uploaded by the second client within the response time limit, inputting the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
and comparing the first text information, the second text information, the first position information and the second position information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result.
2. The data processing method based on image recognition according to claim 1, wherein the comparing the first text information, the second text information, the first location information, and the second location information to obtain a comparison result, and the feeding back the corresponding prompt information to the first client, the second client, or a pre-associated third client based on the comparison result comprises:
judging whether the first text information and the second text information are the same;
if the first text information and the second text information are the same, judging whether the first position information and the second position information are the same;
and if the first position information and the second position information are different, feeding back first prompt information to the third client.
3. The data processing method based on image recognition as claimed in claim 2, wherein the comparing the first text information, the second text information, the first location information and the second location information to obtain a comparison result, and feeding back corresponding prompt information to the first client, the second client or a pre-associated third client based on the comparison result further comprises:
when the first position information is judged to be the same as the second position information, respectively acquiring first shooting time corresponding to a first image and second shooting time corresponding to a second image;
calculating a time difference between the first photographing time and the second photographing time;
and judging whether the difference value between the time difference and the response time limit is greater than a preset threshold value, and feeding back second prompt information to the second client when the difference value between the time difference and the response time limit is greater than the preset threshold value.
4. The image recognition-based data processing method of claim 3, wherein the determining whether the difference between the time difference and the response time limit is greater than a preset threshold further comprises:
and when the difference value between the time difference and the response time limit is judged to be less than or equal to a preset threshold value, adding the user identification corresponding to the second client to a preset data table.
5. The data processing method based on image identification according to any one of claims 1 to 4, wherein after the acquiring the first image carried in the request information, the method further comprises uploading the first image to a first node of a block chain to perform a second encryption process, and the second encryption process comprises:
a first node of the block chain generates a key of an encryption algorithm, and encrypts the first image with the key based on the encryption algorithm to generate a first encrypted image;
encrypting the key by using a public key of an asymmetric key pair of the first node and a second node associated with the first node in the block chain to generate a corresponding key ciphertext;
and storing the first encrypted image and the key ciphertext as block chain data into a corresponding block of a block chain.
6. The data processing method based on image recognition according to any one of claims 1 to 4, wherein after the acquiring the first image carried in the request information, the method further comprises performing a first encryption process on the first image, and the first encryption process comprises:
decomposing the first image into a first approximation sub-band and a first detail sub-band in the time domain using a discrete wavelet transform;
carrying out pseudo-random encryption on the first approximate sub-band, and carrying out quantization processing to obtain a second approximate sub-band;
performing Arnold transform encryption on the first detail sub-band to obtain a second detail sub-band;
merging the second approximation subband data and the second detail subband data to obtain merged data;
and compressing the merged data according to Huffman coding to form a bit stream, and obtaining an encrypted first image based on the bit stream.
7. The data processing method based on image recognition as claimed in claim 1, wherein when it is monitored that the second client uploads the second image within the response time limit, the trajectory information of the second client within the response time limit is obtained and fed back to the third client.
8. A data processing apparatus based on image recognition, the apparatus comprising:
a receiving module, configured to receive request information of data processing sent by a first client and acquire a first image carried in the request information;
an acquisition module, configured to send a preset instruction to a second client in response to the request information sent by the first client, receive confirmation information fed back by the second client, and monitor in real time whether the second client uploads a second image within a preset response time limit;
a monitoring module, configured to, when it is monitored that the second client uploads a second image within the response time limit, input the first image and the second image into a pre-trained image recognition model respectively to obtain first recognition information of the first image and second recognition information of the second image, wherein the first recognition information comprises first text information and first position information of a first target object, and the second recognition information comprises second text information and second position information of a second target object;
and a feedback module, configured to compare the first text information, the second text information, the first position information, and the second position information to obtain a comparison result, and feed back corresponding prompt information to the first client, the second client, or a pre-associated third client based on the comparison result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image recognition-based data processing method of any one of claims 1 to 7.
10. A computer-readable storage medium, comprising a stored data area and a stored program area, wherein the stored data area stores data created according to the use of a blockchain node, and the stored program area stores a data processing program based on image recognition, and when the data processing program based on image recognition is executed by a processor, the steps of the data processing method based on image recognition according to any one of claims 1 to 7 are implemented.
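For claim 5, the second encryption process amounts to a hybrid scheme: the first image is encrypted with a freshly generated symmetric key, and that key is then wrapped with the nodes' asymmetric public key. The sketch below assumes AES-GCM and RSA-OAEP from the Python cryptography package; the claim itself does not name particular algorithms, so these choices are illustrative only.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(image_bytes: bytes, node_public_key) -> dict:
    """Encrypt the image with a fresh symmetric key, then wrap that key with RSA-OAEP."""
    key = AESGCM.generate_key(bit_length=256)           # key generated at the first node
    nonce = os.urandom(12)
    encrypted_image = nonce + AESGCM(key).encrypt(nonce, image_bytes, None)
    key_ciphertext = node_public_key.encrypt(
        key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    # Both items would then be stored as block chain data in a corresponding block.
    return {"first_encrypted_image": encrypted_image, "key_ciphertext": key_ciphertext}

# Usage with a locally generated key pair standing in for the nodes' key pair:
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = hybrid_encrypt(b"raw image bytes", private_key.public_key())
```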
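For claim 6, the first encryption process splits the image into wavelet sub-bands, masks the approximation sub-band with a pseudo-random keystream after quantization, scrambles the detail sub-bands with the Arnold transform, merges the results, and entropy-codes them. The sketch below uses PyWavelets for the decomposition and zlib (whose DEFLATE format includes a Huffman stage) as a stand-in for the Huffman coder specified in the claim; the quantization rule, the keystream generator, and the handling of negative detail coefficients are simplifying assumptions.

```python
import zlib
import numpy as np
import pywt

def arnold(mat: np.ndarray, iterations: int = 3) -> np.ndarray:
    """Arnold (cat-map) scrambling of a square matrix."""
    n = mat.shape[0]
    out = mat
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def encrypt_image(img: np.ndarray, seed: int = 2020) -> bytes:
    """DWT split, mask/scramble the sub-bands, merge, then entropy-code the result."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), "haar")
    # Quantize the approximation sub-band and mask it with a pseudo-random keystream.
    qA = np.clip(cA / 2.0, 0, 255).astype(np.uint8)
    keystream = np.random.default_rng(seed).integers(0, 256, qA.shape, dtype=np.uint8)
    second_approx = qA ^ keystream
    # Scramble each detail sub-band with the Arnold transform (square inputs assumed).
    second_details = [arnold(np.clip(np.abs(d), 0, 255).astype(np.uint8))
                      for d in (cH, cV, cD)]
    merged = np.concatenate([second_approx.ravel()] +
                            [d.ravel() for d in second_details])
    return zlib.compress(merged.tobytes())    # bit stream of the encrypted first image

bitstream = encrypt_image(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```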
CN202010927099.5A 2020-09-07 2020-09-07 Data processing method, device, equipment and storage medium based on image recognition Active CN112084932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010927099.5A CN112084932B (en) 2020-09-07 2020-09-07 Data processing method, device, equipment and storage medium based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010927099.5A CN112084932B (en) 2020-09-07 2020-09-07 Data processing method, device, equipment and storage medium based on image recognition

Publications (2)

Publication Number Publication Date
CN112084932A true CN112084932A (en) 2020-12-15
CN112084932B CN112084932B (en) 2023-08-08

Family

ID=73733209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010927099.5A Active CN112084932B (en) 2020-09-07 2020-09-07 Data processing method, device, equipment and storage medium based on image recognition

Country Status (1)

Country Link
CN (1) CN112084932B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120148101A1 (en) * 2010-12-14 2012-06-14 Electronics And Telecommunications Research Institute Method and apparatus for extracting text area, and automatic recognition system of number plate using the same
CN106982372A (en) * 2016-01-15 2017-07-25 中国移动通信集团福建有限公司 Image processing method and equipment
WO2020140608A1 (en) * 2019-01-04 2020-07-09 平安科技(深圳)有限公司 Image data processing method, apparatus, and computer readable storage medium
CN110070338A (en) * 2019-04-16 2019-07-30 安徽博诺思信息科技有限公司 A kind of mobile Work attendance method and system based on cloud computing
CN110516664A (en) * 2019-08-16 2019-11-29 咪咕数字传媒有限公司 Bill identification method and device, electronic equipment and storage medium
CN110990801A (en) * 2019-11-29 2020-04-10 深圳市商汤科技有限公司 Information verification method and device, electronic equipment and storage medium
CN111476275A (en) * 2020-03-17 2020-07-31 深圳壹账通智能科技有限公司 Target detection method based on picture recognition, server and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114936354A (en) * 2022-05-20 2022-08-23 浙江云程信息科技有限公司 Information processing method and device for engineering supervision
CN114936354B (en) * 2022-05-20 2023-02-17 浙江云程信息科技有限公司 Information processing method and device for engineering supervision
CN116198235A (en) * 2023-03-10 2023-06-02 广东铭钰科技股份有限公司 Coding auxiliary positioning detection equipment and method based on visual processing
CN116198235B (en) * 2023-03-10 2024-01-23 广东铭钰科技股份有限公司 Coding auxiliary positioning detection equipment and method based on visual processing

Also Published As

Publication number Publication date
CN112084932B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN109902575B (en) Anti-walking method and device based on unmanned vehicle and related equipment
CN108124480B (en) Software authorization method, system and equipment
CN112084932B (en) Data processing method, device, equipment and storage medium based on image recognition
US10091196B2 (en) Method and apparatus for authenticating user by using information processing device
RU2016148406A (en) CHECKING IMAGES CAPTURED USING A TEMPORARY LABEL DECODED FROM LIGHTING FROM A MODULATED LIGHT SOURCE
CN109657107B (en) Terminal matching method and device based on third-party application
CN111882233A (en) Storage risk early warning method, system and device based on block chain and storage medium
WO2015039589A1 (en) User identity authorization system and authorization method based on bar codes
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN111931153B (en) Identity verification method and device based on artificial intelligence and computer equipment
CN106503527A (en) A kind of method and apparatus of electronic document fingerprint signature
CN107302434B (en) Method and system for checking electronic signature
CN114004639B (en) Method, device, computer equipment and storage medium for recommending preferential information
CN111224865B (en) User identification method based on payment session, electronic device and storage medium
CN111541692B (en) Identity verification method, system, device and equipment
CN111198862A (en) File storage method and device based on block chain, terminal equipment and medium
CN112511632A (en) Object pushing method, device and equipment based on multi-source data and storage medium
CN112053343A (en) User picture data processing method and device, computer equipment and storage medium
US20220293123A1 (en) Systems and methods for authentication using sound-based vocalization analysis
CN114466362B (en) Method and device for filtering junk short messages under 5G communication based on BilSTM
CN115456812A (en) Intelligent construction site management method, device, equipment and medium
CN115424350A (en) Method for identifying violation behavior, computer device and computer readable storage medium
CN115798003A (en) Identity checking method, equipment and storage medium
CN112085469B (en) Data approval method, device, equipment and storage medium based on vector machine model
CN114549221A (en) Vehicle accident loss processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant