CN111382296A - Data processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN111382296A
CN111382296A (application number CN201811629084.XA)
Authority
CN
China
Prior art keywords
target
image data
image
micro
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811629084.XA
Other languages
Chinese (zh)
Other versions
CN111382296B (en)
Inventor
刘希
邓裕琳
尹鹏
王成
邓志伟
麦继升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201811629084.XA priority Critical patent/CN111382296B/en
Publication of CN111382296A publication Critical patent/CN111382296A/en
Application granted granted Critical
Publication of CN111382296B publication Critical patent/CN111382296B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; face representation
    • G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172: Classification, e.g. identification
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; feature extraction
    • G06V40/197: Matching; classification
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present application provide a data processing method, apparatus, terminal, and storage medium, the method comprising: acquiring attribute information of target image data; performing blocking processing on the target image data using the attribute information to obtain N image data blocks, where N is a positive integer; determining the micro-service corresponding to each of the N image data blocks to obtain M target micro-services, where each target micro-service corresponds to at least one image data block and M is a positive integer less than or equal to N; and sending the N image data blocks to the corresponding target micro-services among the M target micro-services. Processing the data in blocks in this way can improve data processing efficiency.

Description

Data processing method, device, terminal and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method, an apparatus, a terminal, and a storage medium.
Background
With the continuous development of the internet, data volumes keep growing, and the big-data era has gradually come into view. In many application scenarios, a large amount of data needs to be analyzed in order to extract patterns from it. Existing schemes generally analyze and process such data directly, and this direct processing tends to make data processing inefficient.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, a terminal and a storage medium, which can improve the efficiency of data processing.
A first aspect of an embodiment of the present application provides a data processing method, where the method includes:
acquiring attribute information of target image data;
adopting the attribute information to perform blocking processing on the target image data to obtain N image data blocks, wherein N is a positive integer;
determining the micro-service corresponding to each image data block in the N image data blocks to obtain M target micro-services, wherein the target micro-services correspond to at least one image data block, and M is a positive integer less than or equal to N;
and sending the N image data blocks to corresponding target micro-services in the M target micro-services.
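The four steps of the first aspect can be sketched as follows. This is a minimal illustrative sketch, not part of the claimed method itself; the helper parameters (`get_attribute_info`, `partition_blocks`, `select_microservice`, `send`) are hypothetical names standing in for the operations the claims describe.

```python
from collections import defaultdict

def process(target_image_data, get_attribute_info, partition_blocks,
            select_microservice, send):
    # Step 1: acquire attribute information of the target image data.
    attrs = get_attribute_info(target_image_data)
    # Step 2: block the data using the attribute information -> N image data blocks.
    blocks = partition_blocks(target_image_data, attrs)
    # Step 3: map each block to a micro-service; M distinct services, M <= N.
    routing = defaultdict(list)
    for block in blocks:
        routing[select_microservice(block)].append(block)
    # Step 4: send each block to its corresponding target micro-service.
    for service, service_blocks in routing.items():
        for block in service_blocks:
            send(service, block)
    return routing
```

Because several blocks may map to the same micro-service, the number of distinct target micro-services M is at most the number of blocks N, matching the claim.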
With reference to the first aspect of the embodiment of the present application, in a first possible implementation manner of the first aspect, the target image data includes a plurality of first target images, the attribute information includes a shooting location, and the obtaining N image data blocks by performing block processing on the target image data using the attribute information includes:
performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images;
dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N;
and dividing the target image data into N image data blocks according to the second image type.
With reference to the first possible implementation manner of the first aspect of the embodiment of the present application, in a second possible implementation manner of the first aspect, the determining a micro service corresponding to each of the N image data blocks to obtain M target micro services includes:
acquiring a second image type of a first target image in each image data block of the N image data blocks;
determining a reference authority level corresponding to each image data block according to the second image type;
acquiring an image memory value of each image data block;
determining an authority level correction factor of each image data block according to the image memory value of each image data block;
determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
and determining the micro-services corresponding to the target authority levels according to the mapping relation between the preset authority levels and the micro-services to obtain M target micro-services.
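The sub-steps above can be sketched as follows. The patent does not fix how the reference authority level and the correction factor are combined into the target level, so the multiply-and-round rule below, along with all mapping contents, is an assumption for illustration only.

```python
def target_microservices(blocks, type_to_level, level_to_service, factor_of_memory):
    """blocks: list of dicts with 'second_image_type' and 'memory_mb' keys.
    type_to_level: preset mapping from second image type to reference authority level.
    level_to_service: preset mapping from authority level to micro-service.
    factor_of_memory: maps an image memory value to an authority level correction factor.
    """
    services = []
    for block in blocks:
        ref_level = type_to_level[block["second_image_type"]]
        factor = factor_of_memory(block["memory_mb"])
        target_level = round(ref_level * factor)  # assumed combination rule
        services.append(level_to_service[target_level])
    # M target micro-services: the distinct services, M <= N blocks.
    return sorted(set(services))
```

A usage example: with two blocks whose types map to levels 1 and 2 and a factor of 2.0 for blocks over 100 MB, the two blocks land on a low-level and a high-level service respectively.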
A second aspect of embodiments of the present application provides a data processing apparatus including an acquisition unit, a blocking unit, a determination unit, and a transmission unit, wherein,
the acquiring unit is used for acquiring attribute information of the target image data;
the blocking unit is used for carrying out blocking processing on the target image data by adopting the attribute information to obtain N image data blocks, wherein N is a positive integer;
the determining unit is configured to determine a micro service corresponding to each of the N image data blocks to obtain M target micro services, where the target micro services correspond to at least one image data block, and M is a positive integer less than or equal to N;
and the sending unit is used for sending the N image data blocks to corresponding target micro-services in the M target micro-services.
With reference to the second aspect of the embodiment of the present application, in a first possible implementation manner of the second aspect, the target image data includes a plurality of first target images, the attribute information includes a shooting location, and in the aspect of performing blocking processing on the target image data using the attribute information to obtain N image data blocks, the blocking unit is specifically configured to:
performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images;
dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N;
and dividing the target image data into N image data blocks according to the second image type.
With reference to the first possible implementation manner of the second aspect of the embodiment of the present application, in a second possible implementation manner of the second aspect, in the aspect of determining the micro service corresponding to each of the N image data blocks to obtain M target micro services, the determining unit is specifically configured to:
acquiring a second image type of a first target image in each image data block of the N image data blocks;
determining a reference authority level corresponding to each image data block according to the second image type;
acquiring an image memory value of each image data block;
determining an authority level correction factor of each image data block according to the image memory value of each image data block;
determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
and determining the micro-services corresponding to the target authority levels according to the mapping relation between the preset authority levels and the micro-services to obtain M target micro-services.
A third aspect of the embodiments of the present application provides a terminal, including a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, and the computer program includes program instructions, and the processor is configured to call the program instructions to execute the step instructions in the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform part or all of the steps as described in the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has at least the following beneficial effects:
According to the embodiments of the present application, the attribute information of the target image data is acquired; the attribute information is used to perform blocking processing on the target image data to obtain N image data blocks, where N is a positive integer; the micro-service corresponding to each of the N image data blocks is determined to obtain M target micro-services, where each target micro-service corresponds to at least one image data block and M is a positive integer less than or equal to N; and the N image data blocks are sent to the corresponding target micro-services among the M target micro-services. Compared with the prior art, in which the target image data is processed directly, the target data can be divided into a plurality of image data blocks according to its attribute information and those blocks sent to the corresponding target micro-services for processing. Processing the target data after blocking it in this way can improve processing efficiency to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 provides a schematic diagram of a data processing system according to an embodiment of the present application;
fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 2B is a schematic diagram of a vertical centerline of a face image according to an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating another data processing method according to an embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating another data processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal), and so on. For convenience of description, the above-mentioned apparatuses are collectively referred to as electronic devices.
A micro-service may be understood as an application or system that can provide a service on its own.
In order to better understand the data processing method provided in the embodiments of the present application, a data processing system to which the method applies is briefly described first. Referring to fig. 1, fig. 1 is a schematic diagram of a data processing system according to an embodiment of the present application. As shown in fig. 1, the data processing system 101 includes a data processing device 1011. The data processing system 101 receives target image data, which may include a plurality of first target images or a plurality of second target images. The data processing device 1011 then acquires attribute information of the target image data; the attribute information may be a shooting location and a camera identifier of the camera that shot the target image data. The data processing device 1011 uses the attribute information to perform blocking processing on the target image data to obtain N image data blocks and determines the micro-service corresponding to each of the N image data blocks to obtain M target micro-services, where each target micro-service corresponds to at least one image data block, M is a positive integer less than or equal to N, and N is a positive integer. The data processing system 101 sends the N image data blocks to the corresponding target micro-services among the M target micro-services. Compared with the prior art, in which the target image data is processed directly, the target data can be divided into a plurality of image data blocks according to its attribute information and those blocks sent to the corresponding target micro-services for processing, so that processing the target data after blocking it can improve processing efficiency to a certain extent.
Referring to fig. 2A, fig. 2A is a schematic flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 2A, the image processing method includes steps 201 and 204 as follows:
201. attribute information of the target image data is acquired.
Optionally, the attribute information of the target image data may include a shooting location, a camera identifier of a camera shooting the target image data, shooting time, the number of users in the target image data, people flow information carried in the target image data, and the like.
Optionally, before acquiring the attribute information of the target image data, the method may further include acquiring the target image data, where the target image data may include a plurality of target images. The plurality of target images may be images captured by the camera at a preset time interval while a first user and a plurality of second users travel together, for example travelling together in a community, on a road, and so on. The preset time interval may be set according to empirical values or historical data.
202. And carrying out blocking processing on the target image data by adopting the attribute information to obtain N image data blocks, wherein N is a positive integer.
In one possible example, the target image data includes a plurality of first target images, and the attribute information of the target image data may be a shooting location, i.e. the location where the plurality of first target images were shot. A possible method of performing blocking processing on the target image data using the attribute information to obtain N image data blocks includes steps A1-A4, as follows:
a1, performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images;
optionally, a possible method for performing face recognition on the target image may perform face recognition through a face recognition algorithm, for example, a local feature analysis method, a Gabor wavelet transform and pattern matching method, a multi-template matching method, and the like.
Optionally, when the target face image is occluded, the following method may be adopted for recognition, comprising steps A100-A109:
a100, repairing a target face image according to a symmetry principle of a face to obtain a first face image and a target repairing coefficient, wherein the target repairing coefficient is used for expressing the integrity of the face image to the repairing;
the target face image is a face image extracted from the acquired image and only including a part of faces.
A101, performing feature extraction on the first face image to obtain a first face feature set;
a102, performing feature extraction on the target face image to obtain a second face feature set;
a103, searching in the database according to the first facial feature set to obtain facial images of a plurality of objects successfully matched with the first facial feature set;
a104, matching the second face feature set with feature sets of face images of the plurality of objects to obtain a plurality of first matching values;
a105, acquiring human body characteristic data of each object in the plurality of objects to obtain a plurality of human body characteristic data;
a106, matching the human body characteristic data corresponding to the target human face with each of the plurality of human body characteristic data to obtain a plurality of second matching values;
a107, determining a first weight corresponding to the target repair coefficient according to a preset mapping relation between the repair coefficient and the weight, and determining a second weight according to the first weight;
a108, carrying out weighted operation according to the first weight, the second weight, the plurality of first matching values and the plurality of second matching values to obtain a plurality of target matching values;
and A109, selecting a maximum value from the target matching values, and taking an object corresponding to the maximum value as a complete face image corresponding to the target face image.
Optionally, mirror transformation processing may be performed on the target face image according to the symmetry of the face; after the mirror transformation, face restoration may be performed on the processed target face image based on a generative adversarial network (GAN) model to obtain the first face image and the target repair coefficient. The target repair coefficient may be the ratio of the number of pixels in the repaired face portion to the total number of pixels of the whole face. The generative adversarial network model may include components such as discriminators and semantic regularization networks, which are not limited here.
Optionally, the method for extracting features of the first face image may include at least one of: an LBP (Local Binary Patterns) feature extraction algorithm, an HOG (Histogram of Oriented Gradients) feature extraction algorithm, a LoG (Laplacian of Gaussian) feature extraction algorithm, and the like, which are not limited here.
The mapping relationship between preset repair coefficients and weights is such that each preset repair coefficient corresponds to one weight; the weights may be set by the user or defaulted by the system. Specifically, a first weight corresponding to the target repair coefficient is determined according to the mapping relationship between preset repair coefficients and weights, and a second weight is determined from the first weight, where the second weight is the weight applied to the second matching values and the sum of the first weight and the second weight is 1. The first weight is applied to each of the plurality of first matching values and the second weight to each of the plurality of second matching values in a weighted operation, yielding a plurality of target matching values corresponding respectively to the plurality of objects; the object corresponding to the largest of these target matching values is selected, and its face image is taken as the complete face image corresponding to the target face image.
In this example, the incomplete face images are repaired, the repaired face images are matched to obtain face images of a plurality of objects, the complete face images corresponding to the target face images are determined by comparing human body characteristics, so that the face images are repaired, the matched images after the repair are screened to obtain the final complete face images, and the number of users can be determined more accurately.
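The weighted operation of steps A107-A108 and the selection of step A109 can be sketched as follows. This is an illustrative sketch under the assumption, consistent with the text, that the first and second weights sum to 1; the mapping from repair coefficient to first weight is left as a caller-supplied function since the patent only requires that such a preset mapping exist.

```python
def identify(repair_coef, first_matches, second_matches, coef_to_weight):
    """first_matches / second_matches: {object_id: matching value} for the
    face-feature match (A104) and the human-body-feature match (A106).
    coef_to_weight: preset mapping from repair coefficient to first weight."""
    w1 = coef_to_weight(repair_coef)   # first weight (A107)
    w2 = 1.0 - w1                      # second weight; w1 + w2 == 1
    # Weighted operation (A108): one target matching value per object.
    targets = {obj: w1 * first_matches[obj] + w2 * second_matches[obj]
               for obj in first_matches}
    # Selection (A109): the object with the largest target matching value.
    return max(targets, key=targets.get)
```

For instance, with equal weights, an object whose body features match strongly can win even when its repaired-face match is weaker.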
Optionally, another method for determining the number of users in each of the plurality of first target images may be to perform eyeball identification on the first target image to obtain the number of eyeballs; and determining the number of the users according to the number of the eyeballs and the coordinates of each eyeball.
Optionally, since eyeballs appear dark in an image, an eyeball may be detected according to a preset gray-scale threshold, for example a gray level above 80%. Because moles or blemishes may also appear on the face, more than two regions above the preset threshold may be detected; in that case, the eyeballs of the person in the face image may be determined from the positions of the candidate (reference) eyeballs. The method of determining the person's eyeballs from the reference eyeball positions may be: determine the vertical centerline of the face image, i.e. the line passing through the chin and the forehead, as shown in fig. 2B; if a first reference eyeball, mirrored about the vertical centerline, completely or partially overlaps a second reference eyeball, where the overlap ratio for partial overlap is 80% or more, the first and second reference eyeballs are taken as the eyeballs of the person in the face image.
Optionally, the eyeball coordinates may be understood in a coordinate system established with the lower-left corner of each first target image as the origin, the line along the long side of the image as the x-axis, and the line along the short side as the y-axis. The number of users is then determined from the number of eyeballs and their coordinates: if no eyeball symmetric to a target eyeball exists in the image, that eyeball is counted as one user; if a symmetric eyeball exists, the pair of eyeballs together is counted as one user. For example, if there are 5 eyeballs, two of which form a symmetric pair while the remaining three have no symmetric counterpart, the number of users is 4.
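The eyeball-pairing rule above can be sketched as a simple greedy pairing. This is an assumption-laden sketch: the symmetry tolerance `tol` and the Euclidean pixel coordinates are illustrative choices, not specified by the patent.

```python
def count_users(eyeballs, axis_x, tol=10):
    """eyeballs: list of (x, y) eyeball centers. axis_x: x-coordinate of the
    vertical centerline. Two eyeballs that are mirror images about the
    centerline (within tol pixels) count as one user; an unpaired eyeball
    also counts as one user."""
    remaining = list(eyeballs)
    users = 0
    while remaining:
        x, y = remaining.pop(0)
        mirror = (2 * axis_x - x, y)  # reflection about the vertical centerline
        partner = next((p for p in remaining
                        if abs(p[0] - mirror[0]) <= tol and abs(p[1] - y) <= tol),
                       None)
        if partner is not None:
            remaining.remove(partner)  # pair found: both eyeballs, one user
        users += 1
    return users
```

With the 5-eyeball example from the text (one symmetric pair plus three unpaired eyeballs), the count comes out to 4 users.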
A2, dividing the multiple first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
optionally, a possible method of dividing the plurality of first target images into a first image types according to the number of users in each first target image is as follows: the dividing may be performed according to the number of users in different image types, for example, the number of users of the face image included in the first image type is 3, the number of face images included in the second first image type is 7, and the like, and the specific dividing manner may be that the calculation overhead used in face recognition is divided, and the larger the system overhead is, the larger the number of corresponding users is, the smaller the calculation overhead is, and the smaller the number of corresponding users is. The larger the number of users, the more the amount of data required to perform feature extraction increases, and the smaller the number of users, the less the amount of data required to perform feature extraction.
A3, determining a second image type of each first target image in the A first image types according to the shooting position of each first target image in the A first image types, wherein the number of the second image types is N;
Alternatively, the method for determining the second image type according to the shooting location may be: select a first reference first target image from any first image type, where the first reference first target image is any image in that first image type; compare the shooting location of each remaining first target image with that of the first reference first target image, and if the distance between the two shooting locations is less than a preset distance threshold, classify that first target image into the same category as the first reference first target image; if the distance between the shooting location of a second reference first target image and that of the first reference first target image is greater than the preset distance threshold, take the second reference first target image as a new category, and determine the images belonging to the same category as it by the same method, until the second image types of all first target images are determined, obtaining N second image types. The preset distance threshold is set according to empirical values or historical data.
A4, dividing the target image data into N image data blocks according to the second image type.
Wherein the first target image in each second image type is taken as one image data block, thereby dividing the target image data into N image data blocks.
In another possible example, the target image data includes a plurality of second target images, the attribute information includes a camera identifier of a camera that captures the plurality of second target images, and one possible method that uses the attribute information to perform blocking processing on the target image data to obtain N image data blocks includes steps B1-B2, which are specifically as follows:
b1, determining a category corresponding to the camera identification of the cameras for shooting the plurality of second target images by adopting a preset algorithm to obtain N camera categories;
The preset algorithm includes a load-balancing algorithm for evenly distributing the camera identifiers among different categories, and may also be a hash algorithm, by which the camera identifiers can be divided into N categories. Cameras whose identifiers correspond to a large number of captured second target images may be grouped into one category, and cameras corresponding to a small number of captured second target images into another. The classification may also be performed by number ranges, for example, 0-10 images as one category, 11-15 as another, and so on.
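A minimal sketch of the hash-based division, assuming string camera identifiers and an assumed category count N; the choice of MD5 here is illustrative, any stable hash that spreads identifiers across N buckets would serve.

```python
import hashlib

N = 4  # assumed number of camera categories

def camera_category(camera_id: str, n: int = N) -> int:
    """Map a camera identifier to one of n categories via a stable hash."""
    digest = hashlib.md5(camera_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n

# Hypothetical identifiers; each maps deterministically to a category in [0, N).
buckets = {c: camera_category(c) for c in ["cam-001", "cam-002", "cam-003"]}
```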
And B2, performing blocking processing on the target image data through the N camera categories to obtain the N image data blocks.
Optionally, a possible method for performing block processing on target image data through N camera categories to obtain N image data blocks includes steps B21-B23, which are specifically as follows:
b21, extracting a camera identification in each camera category of the N camera categories;
b22, taking the second target image shot by the camera corresponding to the camera identification in each camera category as one category to obtain N image categories;
the method may be understood that one camera category includes a plurality of camera identifiers, the cameras corresponding to the plurality of camera identifiers capture a plurality of images, and the plurality of images captured by the cameras corresponding to all the camera identifiers are used as one image category, so as to obtain N image categories.
And B23, performing blocking processing on the target image data according to the N image categories to obtain N image data blocks.
The target image data is divided according to N image categories to obtain N image sets, each set is provided with a plurality of second target images, and the second target images in each image set are used as one image data block to obtain N image data blocks.
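Steps B21-B23 above can be sketched as a group-by over an assumed camera-to-category mapping (standing in for the result of step B1); the identifiers are hypothetical.

```python
from collections import defaultdict

# Assumed output of step B1: camera identifier -> camera category.
category_of = {"cam-1": 0, "cam-2": 0, "cam-3": 1}

def split_into_blocks(images):
    """images: list of (image_id, camera_id). Returns one image data block
    (here, a list of image ids) per camera category."""
    blocks = defaultdict(list)
    for image_id, camera_id in images:
        blocks[category_of[camera_id]].append(image_id)
    return dict(blocks)

imgs = [("i1", "cam-1"), ("i2", "cam-3"), ("i3", "cam-2")]
print(split_into_blocks(imgs))  # {0: ['i1', 'i3'], 1: ['i2']}
```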
203. And determining the micro-service corresponding to each image data block in the N image data blocks to obtain M target micro-services, wherein the target micro-services correspond to at least one image data block, and M is a positive integer less than or equal to N.
Optionally, a possible method for determining the micro-service corresponding to each image data block of the N image data blocks to obtain M target micro-services includes steps C1-C6, which are specifically as follows:
c1, acquiring a second image type of the first target image in each image data block of the N image data blocks;
Since each image data block is determined by the second image type, the second image type of the first target image can be acquired directly.
C2, determining a reference authority level corresponding to each image data block according to the second image type;
Optionally, the authority level corresponding to each second image type is determined according to a preset mapping relationship between image types and authority levels, and the authority level corresponding to the second image type is used as the reference authority level of the image data block. The mapping relationship between image types and authority levels can be obtained through a neural network model. One possible method for training the neural network model is as follows: the training may include forward training and reverse training, and the model may include N layers of neural networks. During training, sample data is input into the first layer of the N-layer network; a first operation result is obtained after a forward operation in the first layer, and the first result is then input into the second layer for a forward operation to obtain a second result. Forward and reverse training are repeated in this way until training of the model is complete; completion may be marked by the loss value converging to a fixed interval. The sample data consists of image types and authority levels.
C3, obtaining the image memory value of each image data block;
the image memory value may be understood as a memory space required for storing each image data block.
C4, determining the authority level correction factor of each image data block according to each image memory value;
Alternatively, the authority level correction factor may be any value between 0 and 2, such as 0.1, 0.5, or 1.6. Specifically, the higher the memory value of the image, the larger the corresponding authority level correction factor; the lower the memory value, the smaller the corresponding correction factor. The memory value and the correction factor may be directly proportional, or related by some other proportional relationship.
C5, determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
Optionally, the reference authority level is multiplied by the authority level correction factor to obtain a product, which is used as the target authority level corresponding to each image data block; if the product is a decimal, it is rounded to obtain the target authority level.
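Steps C2-C5 can be sketched as follows. The image-type-to-reference-level table and the memory-to-factor rule are illustrative assumptions (the text only fixes the factor's range, 0 to 2, and that it grows with the memory value).

```python
# Assumed mapping from second image type to reference authority level (step C2).
TYPE_TO_REF_LEVEL = {"indoor": 4, "outdoor": 2}

def correction_factor(memory_mb: float, max_mb: float = 1024.0) -> float:
    """Correction factor in (0, 2], proportional to the block's memory value (step C4)."""
    return min(2.0, 2.0 * memory_mb / max_mb)

def target_level(image_type: str, memory_mb: float) -> int:
    """Target authority level = round(reference level * correction factor) (step C5)."""
    ref = TYPE_TO_REF_LEVEL[image_type]
    return round(ref * correction_factor(memory_mb))

print(target_level("indoor", 512.0))  # factor 1.0, so 4 * 1.0 = 4
```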
And C6, determining the micro services corresponding to the target authority levels according to the mapping relation between the preset authority levels and the micro services, and obtaining M target micro services.
Optionally, the higher the authority level, the higher the computing capability of the corresponding micro-service; the lower the authority level, the lower the computing capability. The mapping relationship between preset authority levels and micro-services may be obtained by a neural network model; for the training process of the neural network model, refer to step C2.
204. And sending the N image data blocks to corresponding target micro-services in the M target micro-services.
Optionally, when the N image data blocks are sent to the corresponding target micro-services among the M target micro-services, a secure communication channel may be established between the data processing system and the target micro-services. A possible method for establishing the secure communication channel involves the data processing system, the target micro-service, and a proxy device, where the proxy device is a trusted third-party device; the method specifically comprises the following steps:
D1, initialization: the initialization stage mainly completes the registration of the data processing system and the target micro-service with the proxy device, topic subscription, and the generation of system parameters. The data processing system and the target micro-service register with the proxy device; only a registered data processing system and registered target micro-services can participate in topic publication and subscription, and the target micro-service subscribes to the relevant topics with the proxy device. The proxy device generates a system public parameter (PK) and a master key (MSK), and sends the PK to the registered data processing system and target micro-services.
D2, encryption and distribution: the encryption and release stage is mainly that the data processing system encrypts the load corresponding to the subject to be released and sends the load to the agent equipment. Firstly, a data processing system encrypts a load by adopting a symmetric encryption algorithm to generate a Ciphertext (CT), and then an access structure is formulated
Figure BDA0001928594310000121
PK and generated from data processing system
Figure BDA0001928594310000122
And encrypting the symmetric key, and finally sending the encrypted key and the encrypted load to the proxy equipment. And after receiving the encrypted key and the encrypted CT sent by the data processing system, the proxy equipment filters and forwards the key and the CT to the target micro service.
Optionally, the access structure T is an access tree. Each non-leaf node x of the access tree is a threshold gate, denoted by K_x, with 0 < K_x <= num(x), where num(x) denotes the number of child nodes of x. When K_x = num(x), the non-leaf node represents an AND gate; when K_x = 1, it represents an OR gate. Each leaf node of the access tree represents an attribute. The attribute set satisfying an access tree structure can be defined as follows: let T be an access tree with root node r, and let T_x be the subtree of T with root node x. T_x(S) = 1 indicates that the attribute set S satisfies the access structure T_x. If node x is a leaf node, then T_x(S) = 1 if and only if the attribute att(x) associated with leaf node x is an element of the attribute set S. If node x is a non-leaf node, then T_x(S) = 1 when at least K_x child nodes z satisfy T_z(S) = 1.
D3, private key generation: in the private key generation stage, the proxy device generates a corresponding key for the target micro-service, which is used to decrypt the CT received thereafter. The target micro-service provides an attribute set A_i to the proxy device (the attributes may be information such as characteristics and roles of the subscriber); the proxy device generates a private key SK according to the PK, the attribute set A_i, and the master key MSK, and then sends the generated private key to the target micro-service.
Optionally, the attribute set A_i is a subset of the global set U = {A_1, A_2, ..., A_n}. The attribute set A_i denotes the attribute information of target micro-service i (the i-th target micro-service), which may be characteristics, roles, and the like of the target micro-service, and is the default attribute of the target micro-service; the global set U denotes the set of attribute information of all target micro-services.
D4, decryption: the decryption stage is mainly the process by which the target micro-service decrypts the encrypted payload to recover the plaintext. After receiving the encrypted key and the CT sent by the proxy device, the target micro-service decrypts the encrypted key according to the PK and the SK to obtain the symmetric key. The ciphertext can be successfully decrypted only if the target micro-service's attribute set A_i satisfies the access structure T of the ciphertext, thereby ensuring the security of the communication process.
By constructing the secure communication channel, the security of communication between the target micro-service and the data processing system can be ensured to a certain extent: the possibility that an illegitimate target micro-service steals data transmitted between a legitimate target micro-service and the data processing system is reduced, and the likelihood that an illegitimate target micro-service steals important data by invading and tampering with the system is also reduced.
Referring to fig. 3, fig. 3 is a schematic flow chart of another data processing method according to an embodiment of the present application. As shown in fig. 3, the data processing method may include steps 301 and 307 as follows:
301. acquiring attribute information of target image data;
the target image data includes a plurality of first target images, and the attribute information includes a shooting location.
302. Performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images;
303. dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
304. determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N, and N is a positive integer;
305. dividing the target image data into N image data blocks according to the second image type;
306. determining the micro-service corresponding to each image data block in the N image data blocks to obtain M target micro-services, wherein the target micro-services correspond to at least one image data block, and M is a positive integer less than or equal to N;
307. and sending the N image data blocks to corresponding target micro-services in the M target micro-services.
In this example, the first target images in the target image data are first classified by the number of users in each image to obtain A first image types; they are then classified again according to shooting location to obtain N second image types; finally, the target image data is divided according to the second image types to obtain N image data blocks.
Referring to fig. 4, fig. 4 is a schematic flow chart illustrating another data processing method according to an embodiment of the present disclosure. As shown in fig. 4, the data processing method may include steps 401 and 409, which are as follows:
401. acquiring attribute information of target image data;
the target image data includes a plurality of first target images, and the attribute information includes a shooting location.
402. Adopting the attribute information to perform blocking processing on the target image data to obtain N image data blocks, wherein N is a positive integer;
the obtaining of N image data blocks by blocking the target image data with the attribute information includes: performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images; dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer; determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N; and dividing the target image data into N image data blocks according to the second image type.
403. Acquiring a second image type of a first target image in each image data block of the N image data blocks;
404. determining a reference authority level corresponding to each image data block according to the second image type;
405. acquiring an image memory value of each image data block;
the image memory value may be understood as a memory space required for storing each image data block. The image memory value of each image data block may be directly acquired from a memory address where each image data block is stored.
406. Determining an authority level correction factor of each image data block according to the memory value of each image;
The authority level correction factor may be any value between 0 and 2, such as 0.1, 0.5, or 1.6. Specifically, the higher the memory value of the image, the larger the corresponding authority level correction factor; the lower the memory value, the smaller the corresponding correction factor. The memory value and the correction factor may be directly proportional, or related by some other proportional relationship.
407. Determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
The reference authority level is multiplied by the authority level correction factor to obtain a product, which is used as the target authority level corresponding to each image data block; if the product is a decimal, it is rounded to obtain the target authority level.
408. Determining the micro-services corresponding to the target authority levels according to a mapping relation between preset authority levels and the micro-services to obtain M target micro-services;
wherein the target microservice corresponds to at least one image data block, and M is a positive integer less than or equal to N.
The higher the authority level, the higher the computing capability of the corresponding micro-service; the lower the authority level, the lower the computing capability. The mapping relationship between preset authority levels and micro-services may be obtained by a neural network model; for the training process of the neural network model, refer to step C2.
409. And sending the N image data blocks to corresponding target micro-services in the M target micro-services.
In this example, the reference authority level of each data block is first determined according to the second image type; a correction factor is then determined according to the memory value of the data block; the target authority level is obtained from the correction factor and the reference authority level; and the target micro-service is finally determined, so that the accuracy of determining the target micro-service can be improved to a certain extent.
In accordance with the foregoing embodiments, please refer to fig. 5, fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application, and as shown in the drawing, the terminal includes a processor, an input device, an output device, and a memory, where the processor, the input device, the output device, and the memory are connected to each other, where the memory is used to store a computer program, the computer program includes program instructions, the processor is configured to call the program instructions, and the program includes instructions for performing the following steps;
acquiring attribute information of target image data;
adopting the attribute information to perform blocking processing on the target image data to obtain N image data blocks, wherein N is a positive integer;
determining the micro-service corresponding to each image data block in the N image data blocks to obtain M target micro-services, wherein the target micro-services correspond to at least one image data block, and M is a positive integer less than or equal to N;
and sending the N image data blocks to corresponding target micro-services in the M target micro-services.
In this example, the attribute information of the target image data is acquired; the target image data is processed in blocks using the attribute information to obtain N image data blocks, where N is a positive integer; the micro-service corresponding to each of the N image data blocks is determined to obtain M target micro-services, where each target micro-service corresponds to at least one image data block and M is a positive integer less than or equal to N; and the N image data blocks are sent to the corresponding target micro-services among the M target micro-services. Compared with the prior art, in which the target image data is processed directly, the target data can be divided into a plurality of image data blocks according to its attribute information and the blocks sent to the corresponding target micro-services for processing; because the target data is processed after being divided into blocks, the efficiency of processing the target data can be improved to a certain extent.
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation process. It will be understood that, in order to implement the above functions, the terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of each example described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiment of the present application, the terminal may be divided into the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In accordance with the above, referring to fig. 6, fig. 6 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application, where the apparatus includes an obtaining unit 601, a partitioning unit 602, a determining unit 603, and a sending unit 604,
the acquiring unit 601 is configured to acquire attribute information of target image data;
the blocking unit 602 is configured to perform blocking processing on the target image data by using the attribute information to obtain N image data blocks, where N is a positive integer;
the determining unit 603 is configured to determine a micro service corresponding to each of the N image data blocks, to obtain M target micro services, where the target micro services correspond to at least one image data block, and M is a positive integer smaller than or equal to N;
the sending unit 604 is configured to send the N image data blocks to corresponding target micro services in the M target micro services.
Optionally, the target image data includes a plurality of first target images, the attribute information includes a shooting location, and in the aspect of obtaining N image data blocks by performing blocking processing on the target image data by using the attribute information, the blocking unit 602 is specifically configured to:
performing face recognition on the multiple first target images to obtain the number of users in each first target image in the multiple first target images;
dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N;
and dividing the target image data into N image data blocks according to the second image type.
Optionally, the target image data includes a plurality of second target images, the attribute information includes a camera identifier of a camera that captures the plurality of second target images, and in the aspect of obtaining N image data blocks by performing blocking processing on the target image data by using the attribute information, the blocking unit 602 is specifically configured to:
determining a category corresponding to the camera identification of the cameras for shooting the plurality of second target images by adopting a preset algorithm to obtain N camera categories;
and carrying out blocking processing on the target image data through the N camera categories to obtain the N image data blocks.
Optionally, in the aspect that the target image data is blocked by the N camera categories to obtain the N image data blocks, the blocking unit 602 is specifically configured to:
extracting a camera identification in each camera category of the N camera categories;
taking a second target image shot by a camera corresponding to the camera identification in each camera category as one category to obtain N image categories;
and carrying out blocking processing on the target image data according to the N image categories to obtain N image data blocks.
Optionally, in the aspect of determining the micro service corresponding to each image data block of the N image data blocks to obtain M target micro services, the determining unit 603 is specifically configured to:
acquiring a second image type of a first target image in each image data block of the N image data blocks;
determining a reference authority level corresponding to each image data block according to the second image type;
acquiring an image memory value of each image data block;
determining an authority level correction factor of each image data block according to the memory value of each image;
determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
and determining the micro-services corresponding to the target authority levels according to the mapping relation between the preset authority levels and the micro-services to obtain M target micro-services.
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the data processing methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program causes a computer to execute part or all of the steps of any one of the data processing methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software program module.
The integrated units, if implemented in the form of software program modules and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, read-only memory, random access memory, magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method of data processing, the method comprising:
acquiring attribute information of target image data;
adopting the attribute information to perform blocking processing on the target image data to obtain N image data blocks, wherein N is a positive integer;
determining the micro-service corresponding to each image data block in the N image data blocks to obtain M target micro-services, wherein the target micro-services correspond to at least one image data block, and M is a positive integer less than or equal to N;
and sending the N image data blocks to corresponding target micro-services in the M target micro-services.
2. The method according to claim 1, wherein the target image data comprises a plurality of first target images, the attribute information comprises a shooting location, and the performing blocking processing on the target image data by using the attribute information to obtain N image data blocks comprises:
performing face recognition on the plurality of first target images to obtain the number of users in each of the plurality of first target images;
dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N;
and dividing the target image data into N image data blocks according to the second image type.
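The two-stage typing in claim 2 can be sketched as below. The user-count buckets (A = 2: "single" vs "group") and the precomputed `num_users` field are assumptions for illustration; the claim fixes neither A nor the face-recognition step, which is omitted here.

```python
def second_image_type(num_users, location):
    # Stage 1: first image type from the user count (an assumed 2-bucket split).
    first_type = "single" if num_users <= 1 else "group"
    # Stage 2: refine by shooting location; each distinct pair is a second image type.
    return (first_type, location)

def block_by_second_type(images):
    # Images sharing a (first type, location) pair fall into the same data block.
    blocks = {}
    for img in images:
        key = second_image_type(img["num_users"], img["location"])
        blocks.setdefault(key, []).append(img)
    return blocks

imgs = [
    {"num_users": 1, "location": "lobby"},
    {"num_users": 3, "location": "lobby"},
    {"num_users": 2, "location": "gate"},
]
blocks = block_by_second_type(imgs)   # N = 3 second image types here
```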
3. The method according to claim 1, wherein the target image data comprises a plurality of second target images, the attribute information comprises a camera identification of a camera that captures each of the plurality of second target images, and the performing blocking processing on the target image data by using the attribute information to obtain N image data blocks comprises:
determining, by using a preset algorithm, categories corresponding to the camera identifications of the cameras that capture the plurality of second target images, to obtain N camera categories;
and carrying out blocking processing on the target image data through the N camera categories to obtain the N image data blocks.
4. The method of claim 3, wherein the blocking the target image data by the N camera categories to obtain the N image data blocks comprises:
extracting a camera identification in each camera category of the N camera categories;
taking a second target image shot by a camera corresponding to the camera identification in each camera category as one category to obtain N image categories;
and carrying out blocking processing on the target image data according to the N image categories to obtain N image data blocks.
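The blocking step of claims 3 and 4 can be sketched as follows. The "preset algorithm" that groups camera identifications is left unspecified by the claims, so the categories below are simply given as input; the set-of-ids representation and the `camera_id` field are assumptions for the example.

```python
def block_by_camera_category(images, camera_categories):
    """camera_categories: N sets of camera ids (output of the unspecified preset
    algorithm). Images whose cameras share a category share an image data block."""
    id_to_cat = {cid: idx
                 for idx, cat in enumerate(camera_categories)
                 for cid in cat}
    blocks = [[] for _ in camera_categories]   # N blocks, one per camera category
    for img in images:
        blocks[id_to_cat[img["camera_id"]]].append(img)
    return blocks

cats = [{"cam-1", "cam-2"}, {"cam-3"}]         # N = 2 categories (assumed grouping)
imgs = [{"camera_id": "cam-1"}, {"camera_id": "cam-3"}, {"camera_id": "cam-2"}]
blocks = block_by_camera_category(imgs, cats)
```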
5. The method according to claim 2, wherein the determining the micro-service corresponding to each of the N image data blocks to obtain M target micro-services comprises:
acquiring a second image type of a first target image in each image data block of the N image data blocks;
determining a reference authority level corresponding to each image data block according to the second image type;
acquiring an image memory value of each image data block;
determining an authority level correction factor of each image data block according to the image memory value of each image data block;
determining a target authority level corresponding to each image data block according to the reference authority level and the authority level correction factor;
and determining the micro-services corresponding to the target authority levels according to the mapping relation between the preset authority levels and the micro-services to obtain M target micro-services.
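The routing logic of claim 5 can be sketched as below. The claim specifies the pipeline (reference level from the second image type, correction factor from the block's memory value, then a preset level-to-service mapping) but not the concrete rules, so the 10 MB threshold, the level values, and the service names here are all assumptions.

```python
def target_service(second_image_type, memory_bytes, base_level, level_to_service):
    reference = base_level[second_image_type]                 # reference authority level
    correction = 1 if memory_bytes > 10 * 1024 * 1024 else 0  # assumed threshold rule
    target_level = reference + correction                     # target authority level
    return level_to_service[target_level]                     # preset level -> micro-service map

base = {("group", "gate"): 2}                 # mapping assumed for the example
services = {2: "svc-normal", 3: "svc-secure"}
svc = target_service(("group", "gate"), 20 * 1024 * 1024, base, services)
```

A larger block (20 MB here) gets its reference level raised by the correction factor and is routed to a different micro-service than a small block of the same type.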
6. A data processing apparatus comprising an acquisition unit, a blocking unit, a determination unit, and a transmission unit, wherein,
the acquiring unit is used for acquiring attribute information of the target image data;
the blocking unit is used for carrying out blocking processing on the target image data by adopting the attribute information to obtain N image data blocks, wherein N is a positive integer;
the determining unit is configured to determine a micro service corresponding to each of the N image data blocks to obtain M target micro services, where the target micro services correspond to at least one image data block, and M is a positive integer less than or equal to N;
and the sending unit is used for sending the N image data blocks to corresponding target micro-services in the M target micro-services.
7. The apparatus according to claim 6, wherein the target image data comprises a plurality of first target images, the attribute information comprises a shooting location, and in terms of performing blocking processing on the target image data by using the attribute information to obtain N image data blocks, the blocking unit is specifically configured to:
performing face recognition on the plurality of first target images to obtain the number of users in each of the plurality of first target images;
dividing the plurality of first target images into A first image types according to the number of users in each first target image, wherein A is a positive integer;
determining a second image type of each first target image in the A first image types according to the shooting location of each first target image in the A first image types, wherein the number of the second image types is N;
and dividing the target image data into N image data blocks according to the second image type.
8. The apparatus according to claim 6, wherein the target image data comprises a plurality of second target images, the attribute information comprises a camera identification of a camera that captures each of the plurality of second target images, and in terms of performing blocking processing on the target image data by using the attribute information to obtain N image data blocks, the blocking unit is specifically configured to:
determining, by using a preset algorithm, categories corresponding to the camera identifications of the cameras that capture the plurality of second target images, to obtain N camera categories;
and carrying out blocking processing on the target image data through the N camera categories to obtain the N image data blocks.
9. A terminal, comprising a processor, an input device, an output device, and a memory, the processor, the input device, the output device, and the memory being interconnected, wherein the memory is configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method of any of claims 1-5.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-5.
CN201811629084.XA 2018-12-28 2018-12-28 Data processing method, device, terminal and storage medium Active CN111382296B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811629084.XA CN111382296B (en) 2018-12-28 2018-12-28 Data processing method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN111382296A true CN111382296A (en) 2020-07-07
CN111382296B CN111382296B (en) 2023-05-12

Family

ID=71220940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811629084.XA Active CN111382296B (en) 2018-12-28 2018-12-28 Data processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111382296B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2154630A1 (en) * 2008-08-13 2010-02-17 NTT DoCoMo, Inc. Image identification method and imaging apparatus
US20130101223A1 (en) * 2011-04-25 2013-04-25 Ryouichi Kawanishi Image processing device
CN105637343A (en) * 2014-01-20 2016-06-01 富士施乐株式会社 Detection control device, program, detection system, storage medium and detection control method
CN108229515A (en) * 2016-12-29 2018-06-29 北京市商汤科技开发有限公司 Object classification method and device, the electronic equipment of high spectrum image
CN108898171A (en) * 2018-06-20 2018-11-27 深圳市易成自动驾驶技术有限公司 Recognition processing method, system and computer readable storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159091A (en) * 2021-01-20 2021-07-23 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium
US11822568B2 (en) 2021-01-20 2023-11-21 Beijing Baidu Netcom Science Technology Co., Ltd. Data processing method, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
KR102045978B1 (en) Facial authentication method, device and computer storage
CN111340008B (en) Method and system for generation of counterpatch, training of detection model and defense of counterpatch
CN106530200B (en) Steganographic image detection method and system based on deep learning model
Sajjad et al. Robust image hashing based efficient authentication for smart industrial environment
CN111507386B (en) Method and system for detecting encryption communication of storage file and network data stream
TW202026984A (en) User identity verification method, device and system
US8712047B2 (en) Visual universal decryption apparatus and methods
TWI675308B (en) Method and apparatus for verifying the availability of biometric images
CN104636764B (en) A kind of image latent writing analysis method and its device
CN111340247A (en) Longitudinal federated learning system optimization method, device and readable storage medium
CN112487365B (en) Information steganography method and information detection method and device
CN113766085B (en) Image processing method and related device
Vega et al. Image tampering detection by estimating interpolation patterns
US20240119714A1 (en) Image recognition model training method and apparatus
CN106599841A (en) Full face matching-based identity verifying method and device
CN109856979B (en) Environment adjusting method, system, terminal and medium
CN116383793B (en) Face data processing method, device, electronic equipment and computer readable medium
CN111382296B (en) Data processing method, device, terminal and storage medium
KR101752659B1 (en) Image key certification method and system
CN113689321A (en) Image information transmission method and device based on stereoscopic projection encryption
Zhong et al. Steganographer detection via multi-scale embedding probability estimation
CN111382286B (en) Data processing method and related product
CN112702623A (en) Video processing method, device, equipment and storage medium
Amerini et al. Acquisition source identification through a blind image classification
CN110505285B (en) Park session method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant