CN111062248A - Image detection method, device, electronic equipment and medium


Info

Publication number
CN111062248A
Authority
CN
China
Prior art keywords
ear
image
result
feature
recognition
Prior art date
Legal status
Withdrawn
Application number
CN201911085420.3A
Other languages
Chinese (zh)
Inventor
Peng Yao (彭瑶)
Current Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201911085420.3A
Publication of CN111062248A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image detection method and apparatus, an electronic device, and a medium. In the method, after a detection instruction for a target user is received, an ear image of the target user is acquired, feature recognition is performed on the ear image based on a preset neural network detection model to obtain a recognition result, and the detection result for the target user is then determined from the recognition result. With this technical scheme, when a detection instruction for a target user is received, the ear image of the user can be acquired and the user identified from it, thereby avoiding the low recognition accuracy caused by recognizing other facial organs of the user in the related art.

Description

Image detection method, device, electronic equipment and medium
Technical Field
The present application relates to data processing technologies, and in particular, to a method, an apparatus, an electronic device, and a medium for image detection.
Background
With the rise of the communications era, smart devices have developed continuously and are used by more and more users.
Further, with the rapid development of the internet, multiple functions, such as payment and verification, are often deployed on a smart device to give the user a better experience. To keep these functions secure, identity recognition is required; it is a problem frequently encountered in modern society, for example wherever security checks are needed, such as banks, public security, online shopping, shopping malls, and residential communities. Identity recognition can use biometric technology to capture a person's specific physiological or behavioral characteristics for automatic identification and verification. Generally, identification is performed by acquiring a face image of the user and determining whether the user is legitimate.
However, identity recognition methods in the related art suffer from low recognition accuracy, which weakens the assurance of data security.
Disclosure of Invention
The embodiment of the application provides an image detection method, an image detection device, electronic equipment and a medium.
According to an aspect of an embodiment of the present application, there is provided an image detection method, including:
when a detection instruction for a target user is received, acquiring an ear image of the target user;
performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result;
and determining the detection result of the target user based on the identification result.
Optionally, in another embodiment based on the foregoing method of the present application, after the obtaining the ear image of the target user when receiving the detection instruction for the target user, the method further includes:
carrying out gray level correction on the ear image to obtain an ear image to be filtered;
carrying out noise filtering on the ear image to be filtered to obtain a target ear image;
and carrying out feature recognition on the target ear image based on a preset neural network detection model to obtain a recognition result.
Optionally, in another embodiment based on the foregoing method of the present application, the performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result includes:
based on the neural network detection model, carrying out feature recognition on the ear image to obtain a first feature recognition result corresponding to the ear area;
and/or,
based on the neural network detection model, carrying out feature recognition on the ear image to obtain a second feature recognition result corresponding to the ear shape;
and/or,
and based on the neural network detection model, carrying out feature recognition on the ear image to obtain a third feature recognition result corresponding to the ear fold degree.
Optionally, in another embodiment based on the foregoing method of the present application, the determining a detection result of the target user based on the identification result includes:
matching the recognition result with each feature data in a feature database one by one to obtain a matching result;
and determining the detection result of the target user based on the matching result.
Optionally, in another embodiment based on the foregoing method of the present application, the performing one-to-one matching on the recognition result and each feature data in a feature database to obtain a matching result includes:
determining an identification type corresponding to the identification result, wherein the identification type is used for representing that the identification result corresponds to the ear area size, and/or the ear shape, and/or the ear wrinkle degree;
and matching the recognition result with each feature data in a feature database one by one based on the recognition type to obtain the matching result.
Optionally, in another embodiment based on the foregoing method of the present application, the performing one-to-one matching on the recognition result and each feature data in a feature database to obtain a matching result includes:
determining ear information corresponding to the identification result, wherein the ear information is used for representing that the ear image is a left ear image and/or a right ear image;
and matching the identification result with each feature data in a feature database one by one based on the ear information to obtain the matching result.
Optionally, in another embodiment based on the foregoing method of the present application, before the performing feature recognition on the ear image based on a preset neural network detection model, the method further includes:
obtaining a sample image, wherein the sample image comprises at least one ear feature information;
and training a preset image semantic segmentation model by using the sample image to obtain the neural network detection model meeting preset conditions.
According to another aspect of the embodiments of the present application, there is provided an apparatus for image detection, including:
an acquisition module, configured to acquire an ear image of a target user when a detection instruction for the target user is received;
a generating module, configured to perform feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result;
a determining module, configured to determine a detection result of the target user based on the recognition result.
According to another aspect of the embodiments of the present application, there is provided an electronic device including:
a memory for storing executable instructions; and
a processor for communicating with the memory to execute the executable instructions so as to perform the operations of any one of the image detection methods described above.
According to a further aspect of the embodiments of the present application, there is provided a computer-readable storage medium for storing computer-readable instructions, which when executed, perform the operations of any one of the image detection methods described above.
In the method and the device, after a detection instruction for a target user is received, the ear image of the target user is acquired, feature recognition is performed on the ear image based on a preset neural network detection model to obtain a recognition result, and the detection result for the target user is then determined from the recognition result. With this technical scheme, when a detection instruction for a target user is received, the ear image of the user can be acquired and the user identified from it, thereby avoiding the low recognition accuracy caused by recognizing other facial organs of the user in the related art.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
The present application may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system architecture for image detection according to the present application;
FIG. 2 is a schematic diagram of an image detection method according to the present application;
FIGS. 3a-3b are schematic views of ear images according to the present application;
FIG. 4 is a schematic structural diagram of an apparatus for image detection according to the present application;
FIG. 5 is a schematic view of an electronic device according to the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
In addition, technical solutions between the various embodiments of the present application may be combined with each other, but it must be based on the realization of the technical solutions by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination of technical solutions should be considered to be absent and not within the protection scope of the present application.
It should be noted that all directional indicators in the embodiments of the present application (such as upper, lower, left, right, front, and rear) are used only to explain the relative positions and movements of components in a specific posture (as shown in the drawings); if that posture changes, the directional indicators change accordingly.
A method for performing image detection according to an exemplary embodiment of the present application is described below with reference to fig. 1 to 3. It should be noted that the following application scenarios are merely illustrated for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect. Rather, embodiments of the present application may be applied to any scenario where applicable.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which the image detection method or the image detection apparatus of an embodiment of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The terminal apparatuses 101, 102, 103 in the present application may be terminal apparatuses that provide various services. For example, the user realizes via the terminal device 103 (which may also be the terminal device 101 or 102): when a detection instruction for a target user is received, acquiring an ear image of the target user; performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result; and determining the detection result of the target user based on the identification result.
It should be noted that the image detection method provided in the embodiments of the present application may be executed by one or more of the terminal devices 101, 102, and 103, and/or the server 105; accordingly, the image detection apparatus provided in the embodiments of the present application is generally disposed in the corresponding terminal device and/or the server 105, but the present application is not limited thereto.
The application also provides an image detection method, an image detection device, a target terminal and a medium.
Fig. 2 schematically shows a flow chart of a method of image detection according to an embodiment of the present application. As shown in fig. 2, the method includes:
s101, when a detection instruction for a target user is received, acquiring an ear image of the target user.
It should be noted that the device receiving the detection instruction is not specifically limited in the present application; it may be, for example, a smart device or a server. The smart device may be a PC (Personal Computer), a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, or another mobile terminal device with a display function.
It should be noted that the technical solution of the present application applies to identifying a user by biometric technology. Compared with traditional identity recognition based on an identity card, an IC card, or an account password, biometric recognition has great advantages: a person's biometric features always accompany them and cannot be lost, and they are complex and difficult to imitate, which improves reliability and security. These advantages are likely to make biometric authentication the mainstream way of identity verification in the future. Biometric features currently studied and used include fingerprints, faces, ears, irises, retinas, palms, gestures, palmprints, voiceprints, odors, signatures, keystroke habits, gait, and the like.
Generally, in the related art, identity recognition with biometric information is performed by acquiring the user's iris image, face image, fingerprint, or the like. The difficulty of face recognition stems from the characteristics of the face as a biometric feature, mainly its similarity and variability. Cosmetic surgery is now common, and a changed face shape can cause face recognition to fail; similarity between faces may even let someone else unlock a device successfully. Shaving a beard, changing a hairstyle, putting on glasses, or changing expression can also cause the comparison to fail. Moreover, if the user wears a mask or another accessory covering the face, it must be removed for recognition, which is inconvenient. For fingerprint unlocking, a fingerprint recognition button is required, and unlocking is inconvenient when the user's hands are wet or occupied. In addition, iris recognition requires the user to remove contact lenses and sunglasses, and possibly even ordinary glasses, which can hurt the user experience.
In this application, when a detection instruction for the target user is received, the ear image of the target user can be acquired. The ear image is not specifically limited; it may be, for example, a whole-ear image or a partial ear image.
In addition, the type of the ear image is not specifically limited in the present application, and may be, for example, a left ear image or a right ear image of the target user. But also ear images of both ears of the user, etc.
And S102, carrying out feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result.
Further, after the ear image of the target user is obtained, feature recognition can be performed on the ear image based on a preset neural network detection model, and a recognition result is obtained. It can be understood that the feature identification type of the ear image is not specifically limited in the present application, that is, any feature of the ear image can be identified.
The neural network detection model is not specifically limited in the present application. It may be, for example, a convolutional neural network (CNN): a class of feedforward neural networks that contain convolution operations and have a deep structure, and one of the representative algorithms of deep learning. A convolutional neural network has representation-learning capability and can classify input information in a shift-invariant manner according to its hierarchical structure. Owing to its powerful capability to characterize image features, the CNN performs remarkably in fields such as image classification, object detection, and semantic segmentation.
Further, the CNN model can detect the feature information of the target user's ear image and obtain a corresponding recognition result. The ear image is input into the preset convolutional neural network model, and the output of the model's last fully connected (FC) layer is taken as the feature-data recognition result corresponding to the ear image.
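By way of illustration only, the following is a minimal sketch of this feature-extraction step in Python with PyTorch and torchvision. The patent does not specify an architecture or framework; the ResNet-18 backbone, the 224x224 input size, and the 128-dimensional feature vector are assumptions made here for the example.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Hypothetical backbone: a ResNet-18 whose final fully connected (FC)
    # layer is resized to emit a 128-dimensional ear-feature vector.
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 128)
    model.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_ear_features(image_path):
        """Run the ear image through the CNN; return the last FC layer's output."""
        img = Image.open(image_path).convert("RGB")
        x = preprocess(img).unsqueeze(0)   # add a batch dimension
        with torch.no_grad():
            features = model(x)            # output of the final FC layer
        return features.squeeze(0)

In practice the model would first be trained as described later in this disclosure; the sketch only shows where the output of the last FC layer is taken as the recognition feature.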
S103, determining the detection result of the target user based on the identification result.
It can be understood that, in the present application, the recognition result obtained by feature recognition of the ear image with the neural network detection model can be used to determine whether the target user passes identity recognition. The way the detection result is determined from the recognition result is not specifically limited; for example, the recognition result may be matched against a pre-generated feature database of legitimate users to obtain the corresponding detection result.
In the method and the device, after a detection instruction for a target user is received, the ear image of the target user is acquired, feature recognition is performed on the ear image based on a preset neural network detection model to obtain a recognition result, and the detection result for the target user is then determined from the recognition result. With this technical scheme, when a detection instruction for a target user is received, the ear image of the user can be acquired and the user identified from it, thereby avoiding the low recognition accuracy caused by recognizing other facial organs of the user in the related art.
Optionally, in another embodiment of the present application, after S101 (acquiring an ear image of the target user when receiving a detection instruction for the target user), the method may further include the following steps:
carrying out gray level correction on the ear image to obtain an ear image to be filtered;
carrying out noise filtering on the ear image to be filtered to obtain a target ear image;
and carrying out feature recognition on the target ear image based on a preset neural network detection model to obtain a recognition result.
Further, after the ear image of the target user is acquired, to guarantee the accuracy of feature recognition on the image, the application performs gray correction to ensure that the gray values of the image lie within a normal range. It should be noted that the way of obtaining the ear image to be filtered through gray-level correction is not specifically limited in the present application. For example, it may include any one or more of the following three ways:
gray level correction method:
aiming at image imaging unevenness such as exposure unevenness, the half dark and the half light of an image are made, and gray level correction of different degrees is carried out on the image point by point, so that the gray level of the whole image is uniform.
Gray-scale transformation correction method:
Gray-scale transformation is used when part or all of an image is underexposed, with the aim of enhancing the image's gray-scale contrast.
Histogram correction method:
Histogram correction gives the image a required gray-scale distribution, so that desired image characteristics can be selectively highlighted to meet user requirements.
Furthermore, to ensure the accuracy of feature recognition on the ear image, after gray correction produces the ear image to be filtered, the application can further apply noise filtering to that image to remove the image noise in it.
Image noise is interference from random signals during image acquisition or transmission; it appears as random, discrete, isolated pixels that hinder analysis of the image information by the human eye. Image noise is usually complex and is often treated as a multidimensional random process, so it can be described by means of a random process, i.e. by a probability distribution function and a probability density function.
Still further, the present application may use a corresponding filter to remove noise, with different filters selected for different noise types. For example, for impulse noise, a Median Filter can remove the noise without blurring the image; for Gaussian noise, a Mean Filter can remove the noise, although it blurs the image to some extent.
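As a concrete illustration of the preprocessing chain described above, the following sketch uses OpenCV; histogram equalization stands in for the gray-level correction step, and the filter is chosen by an assumed noise-type label. None of these specific calls are mandated by the patent.

    import cv2

    def preprocess_ear_image(img, noise_type="impulse"):
        """Gray-correct the ear image, then filter noise before recognition."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        corrected = cv2.equalizeHist(gray)     # histogram correction method
        if noise_type == "impulse":
            # A median filter removes salt-and-pepper (impulse) noise
            # without blurring the image.
            return cv2.medianBlur(corrected, 3)
        # A mean (box) filter suits Gaussian noise, at the cost of some blur.
        return cv2.blur(corrected, (3, 3))

The output of preprocess_ear_image plays the role of the target ear image that is then fed to the neural network detection model for feature recognition.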
Optionally, in another embodiment of the present application, in S102 (based on a preset neural network detection model, performing feature recognition on the ear image to obtain a recognition result), any one or more of the following three manners may be further included:
the first mode is as follows:
based on a neural network detection model, carrying out feature recognition on the ear image to obtain a first feature recognition result corresponding to the ear area;
the second mode is as follows:
based on the neural network detection model, performing feature recognition on the ear image to obtain a second feature recognition result corresponding to the ear shape;
the third mode is as follows:
and carrying out feature recognition on the ear image based on the neural network detection model to obtain a third feature recognition result corresponding to the ear fold degree.
In the application, after the ear image is acquired, it is further confirmed whether the ear image is a legitimate one by recognizing the feature information of the ear image to determine its feature recognition result. Feature recognition can be performed on the ear image based on the preset neural network detection model to obtain multiple feature recognition results corresponding to the ear area size, the ear shape, and the ear wrinkle degree, so that identity recognition of the target user can subsequently be judged from these recognition results.
It should be noted that, in the present application, only one of the three feature recognition results may be obtained alone, or a plurality of the three feature recognition results may be obtained.
Further, on the device side, after the camera assembly captures the ear image of the target user, the feature information of the ear image can be extracted by the neural network model. It should be noted that the preset neural network model is not specifically limited in the present application; in a possible implementation, a convolutional neural network model may be used for feature recognition of the ear image.
Among them, a convolutional neural network (CNN) is a class of feedforward neural networks that contain convolution operations and have a deep structure, and is one of the representative algorithms of deep learning. It has representation-learning capability and can classify input information in a shift-invariant manner according to its hierarchical structure. Owing to its powerful capability to characterize image features, the CNN performs remarkably in fields such as image classification, object detection, and semantic segmentation.
Further, the ear feature information in the user image can be extracted with the CNN model. At least one ear image is input into the preset convolutional neural network model, and the output of the model's last fully connected (FC) layer is taken as the feature data corresponding to the ear image, from which the feature recognition result for the ear image is subsequently obtained.
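One possible way to realize the three parallel recognition results (ear area size, ear shape, ear wrinkle degree) is a shared CNN backbone with three output heads, sketched below in Python/PyTorch. The head names and dimensions are assumptions for illustration; the patent only states that one or more of the three results may be produced.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EarDetectionModel(nn.Module):
        def __init__(self, feat_dim=64):
            super().__init__()
            backbone = models.resnet18(weights=None)
            backbone.fc = nn.Identity()        # keep the 512-d pooled features
            self.backbone = backbone
            self.area_head = nn.Linear(512, feat_dim)     # first recognition result
            self.shape_head = nn.Linear(512, feat_dim)    # second recognition result
            self.wrinkle_head = nn.Linear(512, feat_dim)  # third recognition result

        def forward(self, x):
            f = self.backbone(x)
            return {
                "area": self.area_head(f),
                "shape": self.shape_head(f),
                "wrinkle": self.wrinkle_head(f),
            }

Any subset of the three heads can be evaluated, matching the "any one or more of the three manners" wording above.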
It should be further noted that, before determining the feature data corresponding to each user image by using the convolutional neural network model, the convolutional neural network model needs to be obtained in the following manner:
acquiring a sample image, wherein the sample image comprises at least one ear feature information;
and training a preset image semantic segmentation model by using the sample image to obtain a neural network detection model meeting preset conditions.
Further, the present application may identify, through a neural network image classification model, sample features (for example, area size, wrinkle features, shape features, and the like) of at least one object contained in the sample image. The neural network image classification model may classify each sample feature in the sample image, grouping features of the same category into the same type, so that the multiple sample features obtained after semantic segmentation of the sample image may consist of several different types.
It should be noted that, when the neural network image classification model performs semantic segmentation processing on the sample image, the more accurate the classification of the pixel points in the sample image is, the higher the accuracy rate of identifying the labeled object in the sample image is. It should be noted that the preset condition may be set by a user.
For example, the preset condition may be set as a pixel-classification accuracy of 70% or more. The neural network image classification model is then trained repeatedly with the sample images, and once its pixel-classification accuracy reaches 70% or more, the model can be applied in the embodiments of the present application to segment ear images.
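A hedged sketch of this training loop follows. The data loader, segmentation model, optimizer, and the interpretation of the 70% condition as per-pixel accuracy over the training batches are all assumptions made for the example, not the patent's own code.

    import torch

    def train_until_condition(model, loader, threshold=0.70, max_epochs=100):
        """Train repeatedly until pixel-classification accuracy meets the preset condition."""
        criterion = torch.nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(max_epochs):
            correct, total = 0, 0
            for images, masks in loader:       # masks: per-pixel class labels
                optimizer.zero_grad()
                logits = model(images)         # shape (N, classes, H, W)
                loss = criterion(logits, masks)
                loss.backward()
                optimizer.step()
                correct += (logits.argmax(dim=1) == masks).sum().item()
                total += masks.numel()
            if correct / total >= threshold:   # preset condition, e.g. 70%
                return model
        return model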
Further optionally, after feature recognition is performed on the ear image based on a preset neural network detection model to obtain a recognition result, the following steps may be further implemented:
matching the recognition result with each feature data in the feature database one by one to obtain a matching result;
the process of matching the recognition result with each feature data in the feature database one by one to obtain the matching result may include the following two cases:
in the first case:
determining an identification type corresponding to the identification result, wherein the identification type is used for representing that the identification result corresponds to the ear area size, and/or the ear shape, and/or the ear wrinkle degree;
and matching the recognition result with each feature data in the feature database one by one based on the recognition type to obtain a matching result.
Further, after the recognition result for the target user's ear image is obtained, the specific type the result corresponds to, that is, the ear area size, and/or the ear shape, and/or the ear wrinkle degree, can be determined. Once the recognition type is determined, the recognition result is matched one by one against the feature data in the feature database according to the feature corresponding to that type, yielding the matching result.
For example, when the recognition type corresponding to the current recognition result is determined from the ear image to be the ear-area-size feature and the ear-shape feature, the ear-area-size feature A and the ear-shape feature B corresponding to the ear image are determined. Feature A is then matched one by one against the feature data in the feature database; when A matches feature data C in the database and B matches feature data D in the database, a matching result indicating that the ear image matched the feature database successfully is generated.
In the second case:
determining ear information corresponding to the identification result, wherein the ear information is used for representing that the ear image is a left ear image and/or a right ear image;
and matching the recognition result with each feature data in the feature database one by one based on the ear information to obtain a matching result.
And determining the detection result of the target user based on the matching result.
Further, after the recognition result for the target user's ear image is obtained, whether the result corresponds to the target user's left ear and/or right ear can be determined. Once the corresponding ear is determined, the recognition result is matched one by one against the feature data in the feature database according to that ear information, yielding the matching result.
For example, when the ear information corresponding to the current recognition result is determined to be a left-ear image, that left-ear image is matched one by one against all left-ear feature data in the feature database; it can be understood that when left-ear feature data matching the image is found in the database, a matching result indicating successful matching between the ear image and the feature database is generated.
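The one-by-one matching in both cases can be pictured as a lookup keyed by recognition type and ear side, as in the sketch below. The vector representation, the cosine-similarity comparison, and the 0.9 threshold are illustrative assumptions; the patent does not fix a matching metric.

    import numpy as np

    def match_result(feature, database, recognition_type, ear_side, threshold=0.9):
        """Match one recognition result against stored feature data one by one."""
        for stored in database.get((recognition_type, ear_side), []):
            sim = np.dot(feature, stored) / (
                np.linalg.norm(feature) * np.linalg.norm(stored) + 1e-8)
            if sim >= threshold:
                return True                    # matching succeeded
        return False

Under this reading, the detection for the target user would pass only when every produced recognition result finds a match, mirroring the A/C and B/D example above.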
As illustrated in FIGS. 3a-3b, in the present application, when a detection instruction for a target user is received, ear images of the target user are obtained (FIG. 3a shows an ear image of the user's left ear, and FIG. 3b an ear image of the user's right ear). Feature recognition can then be performed on the two ear images based on the preset neural network detection model: the left-ear image yields a feature recognition result corresponding to the ear shape, and the right-ear image yields a feature recognition result corresponding to the ear wrinkle degree (points 1-6 in FIG. 3b mark six key points reflecting the wrinkle degree of the ear image). Still further, the detection result of the target user's identity recognition can be determined from the recognition results of the two ear images.
In another embodiment of the present application, as shown in fig. 4, the present application further provides an apparatus for image detection, the apparatus includes an obtaining module 301, a generating module 302, and a determining module 303, wherein:
an obtaining module 301, configured to obtain an ear image of a target user when a detection instruction for the target user is received;
a generating module 302, configured to perform feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result;
a determining module 303 configured to determine a detection result of the target user based on the identification result.
In another embodiment of the present application, the generating module 302 further includes:
a generating module 302, configured to perform gray level correction on the ear image, so as to obtain an ear image to be filtered;
a generating module 302, configured to perform noise filtering on the ear image to be filtered, so as to obtain a target ear image;
the generating module 302 is configured to perform feature recognition on the target ear image based on a preset neural network detection model, so as to obtain a recognition result.
In another embodiment of the present application, the generating module 302 further includes:
a generating module 302, configured to perform feature recognition on the ear image based on the neural network detection model, so as to obtain a first feature recognition result corresponding to the size of the ear area;
and/or,
a generating module 302, configured to perform feature recognition on the ear image based on the neural network detection model, so as to obtain a second feature recognition result corresponding to the ear shape;
and/or,
a generating module 302, configured to perform feature recognition on the ear image based on the neural network detection model, so as to obtain a third feature recognition result corresponding to the ear wrinkle degree.
In another embodiment of the present application, the determining module 303 further includes:
a determining module 303, configured to match the recognition result with each feature data in a feature database one by one, so as to obtain a matching result;
a determining module 303 configured to determine a detection result of the target user based on the matching result.
In another embodiment of the present application, the generating module 302 further includes:
a generating module 302 configured to determine an identification type corresponding to the identification result, where the identification type is used to characterize that the identification result corresponds to an ear area size, and/or an ear shape, and/or an ear wrinkle degree;
a generating module 302, configured to match the recognition result with each feature data in a feature database one by one based on the recognition type, so as to obtain the matching result.
In another embodiment of the present application, the generating module 302 further includes:
a generating module 302 configured to determine ear information corresponding to the identification result, where the ear information is used to represent that the ear image is a left ear image and/or a right ear image;
a generating module 302, configured to match the identification result with each feature data in a feature database one by one based on the ear information, so as to obtain the matching result.
In another embodiment of the present application, the obtaining module 301 further includes:
an obtaining module 301 configured to obtain a sample image, wherein the sample image includes at least one ear feature information;
an obtaining module 301 configured to train a preset image semantic segmentation model by using the sample image, so as to obtain the neural network detection model meeting a preset condition.
Fig. 5 is a block diagram illustrating a logical structure of an electronic device in accordance with an exemplary embodiment. For example, the electronic device 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, electronic device 400 may include one or more of the following components: a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 402 is configured to store at least one instruction for execution by the processor 401 to implement the interactive special effect calibration method provided by the method embodiments of the present application.
In some embodiments, the electronic device 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402 and peripheral interface 403 may be connected by bus or signal lines. Each peripheral may be connected to the peripheral interface 403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, touch screen display 405, camera 406, audio circuitry 407, positioning components 408, and power supply 409.
The peripheral interface 403 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 401 and the memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402 and the peripheral interface 403 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The Radio Frequency circuit 404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to capture touch signals on or over the surface of the display screen 405. The touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 405 may be one, providing the front panel of the electronic device 400; in other embodiments, the display screen 405 may be at least two, respectively disposed on different surfaces of the electronic device 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device 400. Even further, the display screen 405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display screen 405 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 406 is used to capture images or video. Optionally, camera assembly 406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic location of the electronic device 400 to implement navigation or LBS (Location Based Service). The positioning component 408 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the various components in the electronic device 400. The power source 409 may be alternating current, direct current, disposable or rechargeable. When power source 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 400 also includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyro sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic apparatus 400. For example, the acceleration sensor 411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 401 may control the touch display screen 405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 411. The acceleration sensor 411 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the electronic device 400, and the gyro sensor 412 may cooperate with the acceleration sensor 411 to acquire a 3D motion of the user on the electronic device 400. From the data collected by the gyro sensor 412, the processor 401 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 413 may be disposed on a side bezel of the electronic device 400 and/or on a lower layer of the touch display screen 405. When the pressure sensor 413 is arranged on the side frame of the electronic device 400, a holding signal of the user to the electronic device 400 can be detected, and the processor 401 performs left-right hand identification or shortcut operation according to the holding signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 414 is used for collecting a fingerprint of the user, and the processor 401 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 414 may be disposed on the front, back, or side of the electronic device 400. When a physical button or vendor Logo is provided on the electronic device 400, the fingerprint sensor 414 may be integrated with the physical button or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 based on the ambient light intensity collected by the optical sensor 415. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 405 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
The proximity sensor 416, also known as a distance sensor, is typically disposed on the front panel of the electronic device 400 and is used to capture the distance between the user and the front of the device. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front surface of the electronic device 400 is gradually decreasing, the processor 401 controls the touch display screen 405 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 416 detects that the distance is gradually increasing, the processor 401 controls the touch display screen 405 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 does not constitute a limitation of the electronic device 400, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, such as the memory 402, comprising instructions executable by the processor 401 of the electronic device 400 to perform the image detection method described above, the method comprising: when a detection instruction for a target user is received, acquiring an ear image of the target user; performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result; and determining the detection result of the target user based on the recognition result. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided an application/computer program product comprising one or more instructions executable by the processor 401 of the electronic device 400 to perform the above-described image detection method, the method comprising: when a detection instruction for a target user is received, acquiring an ear image of the target user; performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result; and determining the detection result of the target user based on the recognition result. Optionally, the instructions may also be executable by the processor 401 of the electronic device 400 to perform other steps involved in the exemplary embodiments described above. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered exemplary only, with the true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of image detection, comprising:
when a detection instruction for a target user is received, acquiring an ear image of the target user;
performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result;
and determining a detection result of the target user based on the recognition result.
2. The method of claim 1, further comprising, after the acquiring of the ear image of the target user upon receipt of the detection instruction:
performing gray level correction on the ear image to obtain an ear image to be filtered;
performing noise filtering on the ear image to be filtered to obtain a target ear image;
and performing feature recognition on the target ear image based on the preset neural network detection model to obtain the recognition result.
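As an editorial sketch of the preprocessing recited in claim 2 (not part of the claim language), the following Python snippet assumes OpenCV is available; histogram equalization stands in for gray level correction and a Gaussian blur for noise filtering, since the patent does not specify the exact operators:

import cv2

def preprocess_ear_image(ear_image_bgr):
    # Gray level correction (assumed operator: histogram equalization).
    gray = cv2.cvtColor(ear_image_bgr, cv2.COLOR_BGR2GRAY)
    corrected = cv2.equalizeHist(gray)
    # Noise filtering (assumed operator: Gaussian blur) yields the target ear image.
    return cv2.GaussianBlur(corrected, (5, 5), 0)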
3. The method as claimed in claim 1 or 2, wherein the performing feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result comprises:
performing feature recognition on the ear image based on the neural network detection model to obtain a first feature recognition result corresponding to the ear area;
and/or,
performing feature recognition on the ear image based on the neural network detection model to obtain a second feature recognition result corresponding to the ear shape;
and/or,
performing feature recognition on the ear image based on the neural network detection model to obtain a third feature recognition result corresponding to the ear fold degree.
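One plausible reading of claim 3 is a shared backbone with up to three output heads, one per feature recognition result. The PyTorch sketch below is an editorial illustration; the framework choice, layer sizes, and the eight shape classes are all assumptions:

import torch
import torch.nn as nn

class EarFeatureModel(nn.Module):
    # Illustrative three-head model; every dimension here is assumed.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.area_head = nn.Linear(16, 1)   # first result: ear area
        self.shape_head = nn.Linear(16, 8)  # second result: ear shape classes
        self.fold_head = nn.Linear(16, 1)   # third result: ear fold degree

    def forward(self, x):
        features = self.backbone(x)
        return self.area_head(features), self.shape_head(features), self.fold_head(features)

area, shape, fold = EarFeatureModel()(torch.zeros(1, 1, 64, 64))  # dummy batch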
4. The method of claim 3, wherein the determining of the detection result of the target user based on the recognition result comprises:
matching the recognition result one by one against each item of feature data in a feature database to obtain a matching result;
and determining the detection result of the target user based on the matching result.
5. The method of claim 4, wherein the matching of the recognition result against each item of feature data in the feature database to obtain the matching result comprises:
determining a recognition type corresponding to the recognition result, wherein the recognition type indicates whether the recognition result corresponds to the ear area size, and/or the ear shape, and/or the ear fold degree;
and matching the recognition result one by one against each item of feature data in the feature database based on the recognition type to obtain the matching result.
6. The method according to claim 4 or 5, wherein the matching of the recognition result against each item of feature data in the feature database to obtain the matching result comprises:
determining ear information corresponding to the recognition result, wherein the ear information indicates whether the ear image is a left ear image and/or a right ear image;
and matching the recognition result one by one against each item of feature data in the feature database based on the ear information to obtain the matching result.
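The matching recited in claims 4 to 6 can be pictured as a lookup keyed by recognition type and ear side. The Python sketch below is an editorial illustration; the in-memory database, its entries, the Euclidean metric, and the tolerance are all invented for the example:

import numpy as np

feature_db = {
    ("area", "left"): [np.array([0.42])],
    ("shape", "right"): [np.array([0.1, 0.9])],
}

def match(recognition: np.ndarray, rec_type: str, ear_side: str, tol: float = 0.05) -> bool:
    # Match one by one against the feature data filtered by type and ear side.
    for entry in feature_db.get((rec_type, ear_side), []):
        if np.linalg.norm(recognition - entry) < tol:  # assumed similarity metric
            return True
    return False

print(match(np.array([0.43]), "area", "left"))  # True: within tolerance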
7. The method of claim 1, further comprising, prior to the performing of feature recognition on the ear image based on the preset neural network detection model:
obtaining a sample image, wherein the sample image comprises at least one item of ear feature information;
and training a preset image semantic segmentation model with the sample image to obtain the neural network detection model satisfying a preset condition.
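The training recited in claim 7 amounts to fitting a preset model on sample images until a preset condition holds. The PyTorch sketch below is an editorial illustration; the loss threshold standing in for the preset condition, the optimizer, and the toy linear model are assumptions:

import torch
import torch.nn as nn

def train_detection_model(model, samples, labels, target_loss=0.05, max_epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(samples), labels)
        loss.backward()
        optimizer.step()
        if loss.item() < target_loss:  # stop once the "preset condition" is met
            break
    return model

# Toy usage with a linear model standing in for the image semantic segmentation model.
trained = train_detection_model(nn.Linear(4, 1), torch.randn(8, 4), torch.randn(8, 1))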
8. An apparatus for image detection, comprising:
an acquisition module configured to acquire an ear image of a target user when a detection instruction for the target user is received;
a generating module configured to perform feature recognition on the ear image based on a preset neural network detection model to obtain a recognition result;
a determination module configured to determine a detection result of the target user based on the recognition result.
9. An electronic device, comprising:
a memory for storing executable instructions; and
a processor configured to execute the executable instructions stored in the memory to perform the operations of the method of image detection of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-readable instructions that, when executed, perform the operations of the method of image detection of any one of claims 1 to 7.
CN201911085420.3A 2019-11-08 2019-11-08 Image detection method, device, electronic equipment and medium Withdrawn CN111062248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911085420.3A CN111062248A (en) 2019-11-08 2019-11-08 Image detection method, device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN111062248A true CN111062248A (en) 2020-04-24

Family

ID=70297903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911085420.3A Withdrawn CN111062248A (en) 2019-11-08 2019-11-08 Image detection method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111062248A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101673340A (en) * 2009-08-13 2010-03-17 重庆大学 Method for identifying human ear by colligating multi-direction and multi-dimension and BP neural network
CN108596193A (en) * 2018-04-27 2018-09-28 东南大学 A kind of method and system for building the deep learning network structure for ear recognition
CN108960076A (en) * 2018-06-08 2018-12-07 东南大学 Ear recognition and tracking based on convolutional neural networks
CN109871755A (en) * 2019-01-09 2019-06-11 中国平安人寿保险股份有限公司 A kind of auth method based on recognition of face

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Jie: "Research and Implementation of Human Ear Recognition Based on Few-Shot Learning" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic equipment and medium
CN112580462A (en) * 2020-12-11 2021-03-30 深圳市豪恩声学股份有限公司 Feature point selection method, terminal and storage medium
CN113011277A (en) * 2021-02-25 2021-06-22 日立楼宇技术(广州)有限公司 Data processing method, device, equipment and medium based on face recognition
CN113011277B (en) * 2021-02-25 2023-11-21 日立楼宇技术(广州)有限公司 Face recognition-based data processing method, device, equipment and medium
CN116913519A (en) * 2023-07-24 2023-10-20 东莞莱姆森科技建材有限公司 Health monitoring method, device, equipment and storage medium based on intelligent mirror

Similar Documents

Publication Publication Date Title
CN109034102B (en) Face living body detection method, device, equipment and storage medium
CN109948586B (en) Face verification method, device, equipment and storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN111461097A (en) Method, apparatus, electronic device and medium for recognizing image information
CN112578971B (en) Page content display method and device, computer equipment and storage medium
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN110647881A (en) Method, device, equipment and storage medium for determining card type corresponding to image
CN110675473B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN111598896A (en) Image detection method, device, equipment and storage medium
CN111327819A (en) Method, device, electronic equipment and medium for selecting image
CN110659895A (en) Payment method, payment device, electronic equipment and medium
CN111931712B (en) Face recognition method, device, snapshot machine and system
CN111354378B (en) Voice endpoint detection method, device, equipment and computer storage medium
CN112860046B (en) Method, device, electronic equipment and medium for selecting operation mode
CN112819103A (en) Feature recognition method and device based on graph neural network, storage medium and terminal
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111341317A (en) Method and device for evaluating awakening audio data, electronic equipment and medium
CN110853124A (en) Method, device, electronic equipment and medium for generating GIF dynamic graph
CN111128115B (en) Information verification method and device, electronic equipment and storage medium
CN112214115A (en) Input mode identification method and device, electronic equipment and storage medium
CN111797754A (en) Image detection method, device, electronic equipment and medium
CN111210001A (en) Method and device for adjusting seat, electronic equipment and medium
CN111597468A (en) Social content generation method, device and equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200424