CN111444830A - Imaging method and device based on ultrasonic echo signal, storage medium and electronic device - Google Patents

Info

Publication number
CN111444830A
Authority
CN
China
Prior art keywords
target
sample
image
group
ultrasonic echo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010219890.0A
Other languages
Chinese (zh)
Other versions
CN111444830B (en)
Inventor
郭子毅
梁健
白琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010219890.0A priority Critical patent/CN111444830B/en
Publication of CN111444830A publication Critical patent/CN111444830A/en
Application granted granted Critical
Publication of CN111444830B publication Critical patent/CN111444830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/89Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S15/8906Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
    • G01S15/8977Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques using special techniques for image reconstruction, e.g. FFT, geometrical transformations, spatial deconvolution, time deconvolution
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52023Details of receivers
    • G01S7/52036Details of receivers using analysis of echo signal for target characterisation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/52Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S7/52017Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S7/52046Techniques for image enhancement involving transmitter or receiver
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Abstract

The invention discloses an imaging method and device based on ultrasonic echo signals, a storage medium and an electronic device. The method comprises the following steps: transmitting ultrasonic waves through a loudspeaker on a target terminal and receiving ultrasonic echo signals through a microphone on the target terminal; filtering the ultrasonic echo signals to obtain a target echo signal; inputting the target echo signal to a target generator, which extracts a target feature vector from the target echo signal; and inputting the target feature vector into a neural network model to generate a target image. The invention solves the technical problem in the related art of the high technical cost of imaging based on ultrasonic echo signals.

Description

Imaging method and device based on ultrasonic echo signal, storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to an imaging method and device based on ultrasonic echo signals, a storage medium and an electronic device.
Background
In current imaging based on ultrasonic echo signals, the related prior art performs an inverse transformation, based on physical knowledge, on the feature vector carried in the ultrasonic echo signal to obtain image information. Imaging based on ultrasonic echo signals can also obtain image information of objects outside the visual field. However, this approach still requires multi-channel sound waves for imaging (where the input terminal and the output terminal are different terminals), which leads to the technical problem that the technical cost of imaging based on ultrasonic echo signals is high.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an imaging method and device based on an ultrasonic echo signal, a storage medium and an electronic device, which at least solve the technical problem of higher technical cost of imaging based on the ultrasonic echo signal in the related technology.
According to an aspect of the embodiments of the present invention, there is provided an imaging method based on an ultrasonic echo signal, including: transmitting ultrasonic waves through a loudspeaker on a target terminal, and receiving ultrasonic echo signals through a microphone on the target terminal; filtering the ultrasonic echo signal to obtain a target echo signal; inputting the target echo signal into a target generator, extracting a target feature vector from the target echo signal through the target generator, and inputting the target feature vector into a neural network model to generate a target image, wherein the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is obtained by training an initial generator with a group of sample pairs, each sample pair comprises a sample ultrasonic echo signal and a sample shooting image that correspond in time, and the sample generation image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the temporally corresponding sample shooting image.
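The claimed pipeline — filter the received echo, extract a feature vector with the generator, decode the vector to an image — can be sketched end-to-end in NumPy. All three stages below (an FFT-mask band filter, FFT-magnitude features, and a fixed random projection in place of the trained neural network) are hypothetical stand-ins for illustration, not the patent's models.

```python
import numpy as np

def filter_echo(echo, keep_band=(18000, 22000), fs=48000):
    """Crudely band-limit the received signal via FFT masking
    (a stand-in for the patent's filtering step)."""
    spec = np.fft.rfft(echo)
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / fs)
    mask = (freqs >= keep_band[0]) & (freqs <= keep_band[1])
    return np.fft.irfft(spec * mask, n=len(echo))

def extract_feature_vector(target_echo, fs=48000, band=(18000, 22000), dim=128):
    """Placeholder feature extractor: normalized magnitudes of the
    first `dim` in-band FFT bins."""
    spec = np.abs(np.fft.rfft(target_echo))
    freqs = np.fft.rfftfreq(len(target_echo), d=1.0 / fs)
    in_band = spec[(freqs >= band[0]) & (freqs <= band[1])][:dim]
    return in_band / (np.linalg.norm(in_band) + 1e-9)

def generate_image(feature_vec, shape=(32, 32)):
    """Placeholder for the neural-network decoder: a fixed random
    projection from feature space to pixel space."""
    rng = np.random.default_rng(0)  # fixed "weights" for the sketch
    w = rng.standard_normal((shape[0] * shape[1], feature_vec.size))
    return (w @ feature_vec).reshape(shape)

fs = 48000
t = np.arange(fs // 10) / fs  # 100 ms of received signal
echo = np.sin(2 * np.pi * 20000 * t) \
    + 0.3 * np.random.default_rng(1).standard_normal(t.size)
target_echo = filter_echo(echo, fs=fs)
img = generate_image(extract_feature_vector(target_echo))
print(img.shape)  # (32, 32)
```

In the patent, the feature extractor and decoder are learned jointly with a discriminator; the fixed projection here only makes the data flow concrete.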
Optionally, the transmitting ultrasonic waves through a speaker on the target terminal and receiving ultrasonic echo signals through a microphone on the target terminal includes: transmitting the ultrasonic waves through a single loudspeaker on the target terminal, and receiving the ultrasonic echo signals through a single microphone on the target terminal.
optionally, the transmitting the ultrasonic wave through a speaker on the target terminal, and receiving the ultrasonic echo signal through a microphone on the target terminal includes one of: transmitting the ultrasonic waves through a loudspeaker on the target terminal, and receiving the ultrasonic echo signals through a plurality of microphones on the target terminal; transmitting the ultrasonic waves through a plurality of loudspeakers on the target terminal, and receiving the ultrasonic echo signals through a microphone on the target terminal; the ultrasonic waves are transmitted through a plurality of loudspeakers on the target terminal, and the ultrasonic echo signals are received through a plurality of microphones on the target terminal.
Optionally, after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector into a neural network model, the method further comprises: matching the target image with a predetermined image acquired in advance to obtain a matching result, wherein the matching result is used for indicating whether the target image matches the predetermined image.
Optionally, after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector into a neural network model, the method further comprises: under the condition that the matching result shows that the target image is not matched with the predetermined image, outputting abnormal prompt information on the target terminal, wherein the abnormal prompt information is used for prompting that imaging based on an ultrasonic echo signal is abnormal; locking a screen of the target terminal under the condition that the matching result shows that the target image is not matched with the predetermined image; and under the condition that the matching result shows that the target image is not matched with the predetermined image, canceling the payment operation currently performed by the target terminal.
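The three protective reactions to a failed match can be expressed as a small dispatch routine. The `Terminal` interface below is entirely hypothetical — the patent defines no API — and serves only to make the control flow concrete.

```python
class Terminal:
    """Hypothetical stand-in for the target terminal's control surface;
    these method names are illustrative, not from the patent."""
    def __init__(self):
        self.actions = []
    def show_prompt(self, msg):
        self.actions.append(("prompt", msg))
    def lock_screen(self):
        self.actions.append(("lock", None))
    def cancel_current_payment(self):
        self.actions.append(("cancel_payment", None))

def handle_match_result(matched, terminal):
    """Apply the protective actions listed in the text when the
    ultrasound-generated image does not match the predetermined image."""
    if matched:
        return
    terminal.show_prompt("Imaging based on the ultrasonic echo signal is abnormal")
    terminal.lock_screen()
    terminal.cancel_current_payment()

t = Terminal()
handle_match_result(False, t)
print([a for a, _ in t.actions])  # ['prompt', 'lock', 'cancel_payment']
```

A matched result takes no action; all three reactions fire together here, though the text presents them as alternatives a terminal may implement.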
Optionally, before transmitting the ultrasonic waves through a speaker on the target terminal and receiving the ultrasonic echo signals through a microphone on the target terminal, the method further comprises: acquiring a first group of ultrasonic echo signals and a first group of shot images which correspond in time; processing the first group of ultrasonic echo signals and the first group of shot images to obtain the group of sample pairs; and training the initial generator by using the group of sample pairs to obtain the target generator, wherein a difference between a first recognition probability and a second recognition probability, obtained by inputting the temporally corresponding sample generation image and sample shooting image into the target discriminator, is smaller than or equal to a predetermined threshold, the first recognition probability representing the probability that the sample generation image is a generated image, and the second recognition probability representing the probability that the sample shooting image is a shot image.
Optionally, the training the initial generator using the set of sample pairs, and obtaining the target generator includes: repeatedly performing the following steps until the difference between the first recognition probability and the second recognition probability output by the target discriminator is less than or equal to the predetermined threshold: inputting the sample ultrasound echo signals in the set of sample pairs into the initial generator, generating the sample generation image; inputting the sample generation image and the sample shooting image corresponding to the sample ultrasonic echo signal in time in the group of sample pairs into the target discriminator to obtain the first identification probability and the second identification probability output by the target discriminator; adjusting a parameter in the initial generator in the event that a difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold.
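The training loop just described — generate, discriminate, compare the two recognition probabilities against a threshold, adjust, repeat — can be illustrated with a deliberately simplified scalar toy. Nothing here is the patent's actual network: the one-weight "generator" and the hand-coded "discriminator" exist only to show the stopping criterion.

```python
class ToyGenerator:
    """A single scalar 'weight' standing in for the generator's parameters."""
    def __init__(self):
        self.w = 0.0
    def adjust(self):
        # gradient-like pull of the weight toward the real-data statistic 1.0
        self.w += 0.1 * (1.0 - self.w)

def toy_discriminator(w):
    """Returns (first recognition probability, second recognition probability).
    As the generator improves (w -> 1), its output is harder to flag,
    so the first probability falls toward the fixed second one."""
    return 1.0 - 0.5 * w, 0.55

def train(gen, threshold=0.05, max_iters=1000):
    """Repeat generate/discriminate/adjust until the difference between
    the two recognition probabilities is at most `threshold`."""
    for i in range(max_iters):
        gen.adjust()  # stands in for: generate image, backprop, update
        p1, p2 = toy_discriminator(gen.w)
        if abs(p1 - p2) <= threshold:
            return i + 1  # iterations needed to converge
    return None

print(train(ToyGenerator()))
```

In the real method each iteration feeds a sample ultrasonic echo signal through the generator and both images through the discriminator; the scalar toy keeps only the convergence test.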
Optionally, in a case that a difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold, adjusting a parameter in the initial generator includes: under the condition that the difference value between the first recognition probability and the second recognition probability is larger than the preset threshold value, determining the corresponding relation between the sample feature vector contained in the sample ultrasonic echo signal and the geometric image of the target object through the neural network model; and adjusting parameters in the initial generator based on the corresponding relation between the sample characteristic vector and the geometric image of the target object.
Optionally, the processing the first set of ultrasound echo signals and the first set of captured images to obtain the set of sample pairs includes: processing the first group of ultrasonic echo signals to obtain a group of sample ultrasonic echo signals; processing the first group of shot images to obtain a group of sample shot images; and time aligning the group of sample ultrasonic echo signals and the group of sample shooting images to obtain the group of sample pairs.
Optionally, the processing the first set of ultrasound echo signals to obtain a set of sample ultrasound echo signals includes: filtering out the direct sound wave signals from the first group of ultrasonic echo signals to obtain a target reflected sound wave sequence; dividing the target reflected sound wave sequence to obtain a first group of target subsequences of a preset length; filtering the first group of target subsequences with a target filter to obtain a second group of target subsequences; and normalizing the second group of target subsequences to obtain the group of sample ultrasonic echo signals.
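The four preprocessing steps can be sketched in NumPy. Every concrete value here (a 5 ms direct-path window, a 2048-sample subsequence length, a moving-average stand-in for the target filter, zero-mean/unit-peak normalization) is an assumption for illustration, not a parameter taken from the patent.

```python
import numpy as np

def preprocess_echoes(raw, fs=48000, direct_ms=5, seg_len=2048):
    """Sketch of the four steps: (1) drop the direct-path portion,
    (2) split into fixed-length subsequences, (3) filter each
    subsequence, (4) normalize each subsequence."""
    # 1. remove the direct sound-wave portion (first few milliseconds)
    reflected = raw[int(fs * direct_ms / 1000):]
    # 2. divide into subsequences of the preset length
    n_segs = len(reflected) // seg_len
    segs = reflected[:n_segs * seg_len].reshape(n_segs, seg_len)
    # 3. filter (placeholder: 5-tap moving average per subsequence)
    kernel = np.ones(5) / 5.0
    segs = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 1, segs)
    # 4. normalize each subsequence to zero mean and unit peak
    segs = segs - segs.mean(axis=1, keepdims=True)
    peak = np.abs(segs).max(axis=1, keepdims=True)
    return segs / np.where(peak == 0, 1.0, peak)

raw = 0.2 * np.random.default_rng(0).standard_normal(24000)  # 0.5 s at 48 kHz
segs = preprocess_echoes(raw)
print(segs.shape)  # (11, 2048)
```

Dropping the leading samples is the simplest way to reject the direct speaker-to-microphone path; a practical system might instead subtract a calibrated direct-path template.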
Optionally, the processing the first group of shot images to obtain the group of sample shot images includes: performing an image augmentation operation on the first group of shot images to obtain the group of sample shot images, wherein the image augmentation operation comprises adding noise to the first group of shot images, cropping the first group of shot images, and reducing the resolution of the first group of shot images.
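A minimal sketch of the three named augmentation operations on a 2-D grayscale array; the noise level, crop margin, and 2x downsampling factor are illustrative choices, not values from the patent.

```python
import numpy as np

def augment(image, rng=None):
    """Return the three augmented variants the text names:
    a noisy copy, a center crop, and a lower-resolution copy."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = image + rng.normal(0.0, 0.05, image.shape)   # add noise
    h, w = image.shape
    cropped = image[h // 8: h - h // 8, w // 8: w - w // 8]  # center crop
    low_res = image[::2, ::2]                            # 2x downsample
    return noisy, cropped, low_res

img = np.zeros((64, 64))
noisy, cropped, low_res = augment(img)
print(noisy.shape, cropped.shape, low_res.shape)  # (64, 64) (48, 48) (32, 32)
```

Augmenting the shot images this way enlarges the training set and makes the discriminator less sensitive to capture conditions.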
Optionally, after training the initial generator using the set of sample pairs and obtaining the target generator, the method further includes: acquiring a second group of ultrasonic echo signals and a second group of shot images which correspond in time, wherein the second group of ultrasonic echo signals and the second group of shot images are the ultrasonic echo signals and the shot images which are acquired in a preset scene; processing the second group of ultrasonic echo signals and the second group of shot images to obtain a second group of sample pairs; updating the target generator with the second set of sample pairs, or updating the target discriminator with the second set of sample pairs, or updating the target generator and the target discriminator with the second set of sample pairs.
According to another aspect of the embodiments of the present invention, there is also provided an imaging apparatus based on an ultrasonic echo signal, including: a communication module, configured to transmit ultrasonic waves through a loudspeaker on a target terminal and receive ultrasonic echo signals through a microphone on the target terminal; a filtering module, configured to filter the ultrasonic echo signal to obtain a target echo signal; and a generating module, configured to input the target echo signal to a target generator, extract a target feature vector from the target echo signal through the target generator, and input the target feature vector to a neural network model to generate a target image, where the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is obtained by training an initial generator with a group of sample pairs, each sample pair includes a sample ultrasonic echo signal and a sample shooting image that correspond in time, and the sample generation image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the temporally corresponding sample shooting image.
Optionally, the communication module includes: the first communication unit is used for transmitting the ultrasonic wave through a loudspeaker on the target terminal and receiving the ultrasonic echo signal through a microphone on the target terminal.
Optionally, the communication module further comprises at least one of: the second communication unit is used for transmitting the ultrasonic waves through a loudspeaker on the target terminal and receiving the ultrasonic echo signals through a plurality of microphones on the target terminal; the third communication unit is used for transmitting the ultrasonic waves through a plurality of loudspeakers on the target terminal and receiving the ultrasonic echo signals through a microphone on the target terminal; and the fourth communication unit is used for transmitting the ultrasonic waves through a plurality of loudspeakers on the target terminal and receiving the ultrasonic echo signals through a plurality of microphones on the target terminal.
Optionally, the apparatus is further configured to: after the target echo signal is input into the target generator, the target feature vector is extracted from the target echo signal through the target generator, and the target image is generated by inputting the target feature vector into the neural network model, match the target image with a predetermined image acquired in advance to obtain a matching result, wherein the matching result is used for indicating whether the target image matches the predetermined image.
Optionally, the apparatus is further configured to: after the target echo signal is input to a target generator, a target feature vector is extracted from the target echo signal through the target generator, and a target image is generated through inputting the target feature vector into a neural network model, and in the case that the matching result shows that the target image is not matched with the predetermined image, abnormal prompt information is output on the target terminal, wherein the abnormal prompt information is used for prompting that imaging based on an ultrasonic echo signal is abnormal; locking a screen of the target terminal under the condition that the matching result shows that the target image is not matched with the predetermined image; and under the condition that the matching result shows that the target image is not matched with the predetermined image, canceling the payment operation currently performed by the target terminal.
Optionally, the apparatus is further configured to: before the ultrasonic waves are transmitted through a loudspeaker on the target terminal and the ultrasonic echo signals are received through a microphone on the target terminal, acquire a first group of ultrasonic echo signals and a first group of shot images which correspond in time; process the first group of ultrasonic echo signals and the first group of shot images to obtain the group of sample pairs; and train the initial generator by using the group of sample pairs to obtain the target generator, wherein a difference between a first recognition probability and a second recognition probability, obtained by inputting the temporally corresponding sample generation image and sample shooting image into the target discriminator, is smaller than or equal to a predetermined threshold, the first recognition probability representing the probability that the sample generation image is a generated image, and the second recognition probability representing the probability that the sample shooting image is a shot image.
Optionally, the apparatus is configured to train the initial generator using the set of sample pairs to obtain the target generator by: repeatedly performing the following steps until the difference between the first recognition probability and the second recognition probability output by the target discriminator is less than or equal to the predetermined threshold: inputting the sample ultrasound echo signals in the set of sample pairs into the initial generator, generating the sample generation image; inputting the sample generation image and the sample shooting image corresponding to the sample ultrasonic echo signal in time in the group of sample pairs into the target discriminator to obtain the first identification probability and the second identification probability output by the target discriminator; adjusting a parameter in the initial generator in the event that a difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold.
Optionally, the apparatus is configured to adjust the parameter in the initial generator if the difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold by: under the condition that the difference value between the first recognition probability and the second recognition probability is larger than the preset threshold value, determining the corresponding relation between the sample feature vector contained in the sample ultrasonic echo signal and the geometric image of the target object through the neural network model; and adjusting parameters in the initial generator based on the corresponding relation between the sample characteristic vector and the geometric image of the target object.
Optionally, the apparatus is configured to process the first set of ultrasound echo signals and the first set of captured images to obtain the set of sample pairs by: processing the first group of ultrasonic echo signals to obtain a group of sample ultrasonic echo signals; processing the first group of shot images to obtain a group of sample shot images; and time aligning the group of sample ultrasonic echo signals and the group of sample shooting images to obtain the group of sample pairs.
Optionally, the apparatus is configured to process the first set of ultrasound echo signals to obtain a set of sample ultrasound echo signals, including: filtering and rejecting direct sound wave signals in the first group of ultrasonic echo signals to obtain a target reflected sound wave sequence; dividing the target reflected sound wave sequence to obtain a first group of target subsequences with the length being preset length; filtering the first group of target subsequences by using a target filter to obtain a second group of target subsequences; and carrying out normalization processing on the second group of target subsequences to obtain the group of sample ultrasonic echo signals.
Optionally, the apparatus is configured to process the first group of shot images to obtain the group of sample shot images by: performing an image augmentation operation on the first group of shot images to obtain the group of sample shot images, wherein the image augmentation operation comprises adding noise to the first group of shot images, cropping the first group of shot images, and reducing the resolution of the first group of shot images.
Optionally, the apparatus is further configured to: after the initial generator is trained by using the group of sample pairs to obtain the target generator, acquire a second group of ultrasonic echo signals and a second group of shot images which correspond in time, wherein the second group of ultrasonic echo signals and the second group of shot images are ultrasonic echo signals and shot images acquired in a predetermined scene; process the second group of ultrasonic echo signals and the second group of shot images to obtain a second group of sample pairs; and update the target generator with the second group of sample pairs, or update the target discriminator with the second group of sample pairs, or update both the target generator and the target discriminator with the second group of sample pairs.
According to a further aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned imaging method based on ultrasound echo signals when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the above imaging method based on the ultrasonic echo signal through the computer program.
In the embodiment of the invention, ultrasonic waves are emitted through a loudspeaker on the target terminal and ultrasonic echo signals are received through a microphone on the target terminal; the ultrasonic echo signals are filtered to obtain a target echo signal; the target echo signal is input to a target generator, which extracts a target feature vector from it; and the target feature vector is input into a neural network model to generate a target image. This replaces the prior-art approach in which an image of an object outside the visual field can be obtained only through multi-channel sound waves, achieves the technical effects of improving the applicability of imaging based on ultrasonic echo signals and reducing its cost, and thereby solves the technical problem in the related art of the high technical cost of imaging based on ultrasonic echo signals.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative ultrasound echo signal based imaging method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an imaging method based on ultrasonic echo signals according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a method of imaging based on ultrasonic echo signals in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of another imaging method based on ultrasonic echo signals according to an embodiment of the invention;
FIG. 5 is a schematic diagram of yet another method of imaging based on ultrasonic echo signals in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of yet another method of imaging based on ultrasonic echo signals in accordance with an embodiment of the present invention;
FIG. 7 is a schematic diagram of yet another method of imaging based on ultrasonic echo signals in accordance with an embodiment of the present invention;
FIG. 8 is a schematic diagram of yet another method of imaging based on ultrasonic echo signals in accordance with an embodiment of the present invention;
FIG. 9 is a flow chart illustrating another method of imaging based on ultrasound echo signals in accordance with an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an alternative imaging apparatus based on ultrasonic echo signals according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, partial nouns or terms referred to in the embodiments of the present application will be described:
ultrasonic echo signal: the signal reflected back from an object after an ultrasonic sound wave is emitted.
Generating network: a network that performs a feature transformation on input samples through a generator.
Discriminating network: a network that judges, through a discriminator, the class to which an input sample belongs.
Generative Adversarial Network (GAN): a network that generates samples from a noise vector and approximates the distribution of real data samples.
The invention is illustrated below with reference to examples:
According to an aspect of the embodiment of the present invention, an imaging method based on an ultrasonic echo signal is provided. Optionally, in this embodiment, the imaging method based on an ultrasonic echo signal may be applied to a hardware environment formed by a server 101 and a user terminal 103 as shown in fig. 1. As shown in fig. 1, the server 101 is connected to the terminal 103 through a network (the type of which is not limited) and may be used to provide services (such as application services, conference services, game services, etc.) for the user terminal or a client installed on it. A database 105 may be provided on the server or separately from the server, and may be used to provide a data storage service for the server 101. The user terminal 103 is not limited to a PC, a mobile phone, a tablet computer, or the like. An imaging application 107 based on the ultrasonic echo signal is displayed on the user terminal 103, and the imaging service based on the ultrasonic echo signal may be used through an entrance of the imaging application 107 configured on the terminal.
Optionally, as an optional implementation manner, as shown in fig. 2, the imaging method based on the ultrasonic echo signal includes:
s202, transmitting ultrasonic waves through a loudspeaker on a target terminal, and receiving ultrasonic echo signals through a microphone on the target terminal;
s204, filtering the ultrasonic echo signal to obtain a target echo signal;
and S206, inputting a target echo signal into a target generator, extracting a target feature vector from the target echo signal through the target generator, and inputting the target feature vector into a neural network model to generate a target image, wherein the target image is an image identified by the target terminal through ultrasonic waves, the target generator is a generator obtained by training an initial generator by using a group of sample pairs, each sample pair comprises a sample ultrasonic echo signal corresponding in time and a sample shooting image, and the sample generated image obtained by inputting the sample ultrasonic echo signal into the target generator and the sample shooting image corresponding in time cannot be distinguished by a target discriminator.
Optionally, in this embodiment, the target terminal may include, but is not limited to, a PC, a mobile phone, a tablet computer, and the like, and the filtering process may include, but is not limited to, filtering and removing a direct wave in the ultrasonic echo signal, retaining a reflected wave in the ultrasonic echo signal, and taking the reflected wave in the ultrasonic echo signal as the target echo signal.
Alternatively, in this embodiment, fig. 3 is a schematic diagram of another imaging method based on an ultrasonic echo signal according to an embodiment of the present invention. As shown in fig. 3, the imaging method may be applied to an identification scenario based on ultrasonic echoes, including but not limited to Turing Shield: while a user inputs a password or a verification code on a mobile terminal, the user's face is assumed to be facing the camera at the top of the mobile phone most of the time; a sound wave is transmitted through the earpiece near the top camera, and the echo is received through the microphone near the top camera. The acoustic echo data are reported and then used to train an identity verification model of the user; the model then verifies whether the person inputting the password or verification code is the owner of the mobile phone, so that the identity of the person entering the password is identified and verified, and the security of the terminal is improved.
Optionally, in this embodiment, the imaging method based on the ultrasonic echo signal may be applied to fields including, but not limited to, identity authentication, electronic commerce, data security, financial management, and intelligent hardware, and the like, which are only examples, and a specific application scenario is not limited to the above, and the present invention is not limited thereto.
Alternatively, in this embodiment, the target generator and the target discriminator form an adversarial network. The target generator is responsible for the mapping from the sound wave to the generated image: it extracts a feature vector from the ultrasonic echo signal, reshapes it to an image size, and applies operations such as convolution and up-sampling in a convolutional neural network to generate the target image corresponding to the target object in the captured image. The target discriminator is responsible for distinguishing the generated target image from the captured image, and the target generator aims to make the generated samples confuse the discriminator so that the target image cannot be distinguished from the captured image.
In general, a convolutional neural network may be divided into a feature extraction model and a linear classification model: the feature extraction model obtains a feature vector from an input picture through multi-layer convolution operations, and the linear classification model classifies the extracted feature vector. The target generator uses the inverse of the feature extraction operations to recover a corresponding image from the feature vector carried in the ultrasonic echo signal; the recovered image is the target image generated by the generator.
Optionally, in this embodiment, the process of generating the target image by the target generator may include, but is not limited to, the following steps:
acquire feature vector → reshape image size → [multi-layer convolution → upsampling/deconvolution] × N → convolution → generate target image.
The above-mentioned manner of generating the target image is only an example, and the present invention is not limited thereto.
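The generation pipeline above can be sketched in a minimal, framework-free form. The following is an illustrative numpy stand-in, not the patented model: the 8x8 seed size, the number of blocks, and the fixed smoothing kernel are all hypothetical, whereas a real implementation would use learned convolution weights and transposed convolutions in a deep-learning framework.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor upsampling: double the height and width.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def conv2d(x, kernel):
    # Minimal valid-mode 2-D convolution for a single channel.
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def generate_image(feature_vec, n_blocks=2):
    # Reshape the feature vector into a small square "seed" image.
    side = int(np.sqrt(feature_vec.size))
    img = feature_vec[:side * side].reshape(side, side)
    kernel = np.ones((3, 3)) / 9.0      # placeholder (untrained) kernel
    for _ in range(n_blocks):           # [convolution -> upsampling] x N
        img = np.pad(img, 1)            # 'same' padding before convolving
        img = conv2d(img, kernel)
        img = upsample2x(img)
    img = np.pad(img, 1)
    return conv2d(img, kernel)          # final convolution

vec = np.random.default_rng(0).standard_normal(64)
out = generate_image(vec)
print(out.shape)  # an 8x8 seed grows to 32x32 after two upsampling blocks
```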
According to this embodiment, ultrasonic waves are emitted through a speaker on the target terminal and the ultrasonic echo signals are received through a microphone on the target terminal; the ultrasonic echo signals are filtered to obtain target echo signals; the target echo signals are input to the target generator, which extracts target feature vectors from them and inputs the target feature vectors into a neural network model to generate target images. This solves the technical problem in the related art that the technical cost of imaging based on ultrasonic echo signals is high.
In an alternative embodiment, transmitting the ultrasonic waves through a speaker on the target terminal and receiving the ultrasonic echo signals through a microphone on the target terminal includes: transmitting the ultrasonic waves through one speaker on the target terminal, and receiving the ultrasonic echo signals through one microphone on the target terminal.
Optionally, in this embodiment, the one speaker and the one microphone are configured in the same terminal. Fig. 4 is a schematic diagram of another imaging method based on an ultrasonic echo signal according to an embodiment of the present invention. As shown in fig. 4, the speaker 402, the microphone 404, and the camera 406 are configured in the same terminal; the sensors of the speaker 402 and the microphone 404 may be located on the front, back, side, and so on of the terminal, and the positions of the speaker 402, the microphone 404, and the camera 406 may include one or more of the combinations shown in fig. 4, which is not limited in any way.
Optionally, in this embodiment, fig. 5 is a schematic diagram of another imaging method based on an ultrasonic echo signal according to an embodiment of the present invention, and as shown in fig. 5, an ultrasonic echo signal transmitted by a mobile phone may be generated by an echo on a face surface and received by the mobile phone, where example 502 in fig. 5 is a schematic diagram of an ultrasonic wave transmitted by the mobile phone, and example 504 is a schematic diagram of an echo generated on a face surface and received by the mobile phone.
The above is merely an example, and the present invention is not limited to any particular location of the speaker and the microphone.
Through the embodiment, the loudspeaker and the microphone are configured on the same terminal, imaging based on the ultrasonic echo signal can be realized by using a single channel, the applicability of imaging based on the ultrasonic echo signal is increased, and the technical cost of imaging based on the ultrasonic echo signal is reduced.
In an alternative embodiment, the transmitting the ultrasonic wave through a speaker on the target terminal and receiving the ultrasonic echo signal through a microphone on the target terminal includes one of the following: transmitting ultrasonic waves through a loudspeaker on a target terminal, and receiving ultrasonic echo signals through a plurality of microphones on the target terminal; transmitting ultrasonic waves through a plurality of loudspeakers on a target terminal, and receiving ultrasonic echo signals through a microphone on the target terminal; ultrasonic waves are transmitted through a plurality of speakers on the target terminal, and ultrasonic echo signals are received through a plurality of microphones on the target terminal.
Optionally, in this embodiment, the number of the speakers and the microphones may include one or more, and sending and receiving the ultrasonic echo signal through one speaker or one microphone may save the imaging cost based on the ultrasonic echo signal, improve the imaging efficiency based on the ultrasonic echo signal, and have a wide application range; the ultrasonic echo signals are transmitted and received through the loudspeakers or the microphones, so that the imaging quality based on the ultrasonic echo signals can be improved, and the identification image with higher accuracy can be generated.
In an optional embodiment, after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector into a neural network model, the method further includes: and matching the target image with a pre-acquired image to obtain a matching result, wherein the matching result is used for indicating whether the target image is matched with a predetermined image.
Optionally, in this embodiment, the pre-acquired image may be stored in a storage space of the terminal, or may be stored in a database of the server, and when matching is required, the pre-acquired image is called by the terminal.
Optionally, in this embodiment, the matching may include, but is not limited to, determining whether the feature value of the target image is the same as the feature value of the pre-acquired image, whether the color value of the target image is the same as the color value of the pre-acquired image, and whether the preset target number in the target image is the same as the preset target number of the pre-acquired image.
The matching method is only an example, and the invention is not limited to the matching method.
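One of the matching criteria mentioned above, comparing feature values, might be sketched as a cosine-similarity check. This is a hedged illustration: the feature vectors and the 0.9 threshold are hypothetical, and the specification does not prescribe a particular similarity measure.

```python
import numpy as np

def match_images(target_feat, stored_feat, threshold=0.9):
    # Cosine similarity between the target-image feature vector and the
    # pre-acquired image's feature vector; threshold is a hypothetical value.
    sim = np.dot(target_feat, stored_feat) / (
        np.linalg.norm(target_feat) * np.linalg.norm(stored_feat))
    return sim >= threshold

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 0.0])
print(match_images(a, a))  # identical feature vectors match
print(match_images(a, b))  # orthogonal feature vectors do not
```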
By this embodiment, the imaging method based on the ultrasonic echo signal can be effectively applied to scenarios requiring image matching, such as a Turing Shield owner-identification system, an access control system, a ticket checking system, and the like, achieving the technical effect of improving the applicability of the imaging method based on the ultrasonic echo signal.
In an optional embodiment, after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector into a neural network model, the method further comprises: under the condition that the matching result shows that the target image is not matched with the predetermined image, outputting abnormal prompt information on the target terminal, wherein the abnormal prompt information is used for prompting that imaging based on the ultrasonic echo signal is abnormal; under the condition that the matching result shows that the target image is not matched with the predetermined image, locking a screen of the target terminal; and in the case that the matching result shows that the target image does not match the predetermined image, canceling the payment operation currently performed by the target terminal.
Optionally, in this embodiment, the above technical solution may be applied to ticket checking systems for trains, airports, movie theaters, scenic spots, and the like. Image information of the people to be admitted is collected in advance and stored in the server; when a person passes a terminal that performs imaging based on the ultrasonic echo signal, the ultrasonic echo information is collected, thereby realizing person identification. When the matching result indicates that the target image does not match the predetermined image, the abnormality prompt information is output on the target terminal. Fully automatic ticket checking is thus achieved: people passing through the system do not need to carry a pass, the ticket checking process is simplified, and ticket checking efficiency is improved.
Optionally, in this embodiment, the technical solution may also be applied to a screen locking protection interface, and when other users use the terminal to perform imaging unlocking based on an ultrasonic echo signal, the screen of the target terminal may be locked under the condition that the matching result indicates that the target image is not matched with the predetermined image by collecting image information of a terminal owner in advance and storing the image information in the terminal, so as to achieve the technical effects of protecting terminal privacy and avoiding leakage of terminal data.
Optionally, in this embodiment, the above technical solution may also be applied to a payment scenario, and when the matching result indicates that the target image is not matched with the predetermined image, the payment operation currently performed by the target terminal is cancelled, so as to achieve the technical effects of protecting user properties and enriching technical means for protecting properties.
In an optional embodiment, before transmitting the ultrasonic waves through a speaker on the target terminal and receiving the ultrasonic echo signal through a microphone on the target terminal, the method further comprises: acquiring a first group of ultrasonic echo signals and a first group of shot images which correspond in time; processing the first group of ultrasonic echo signals and the first group of shot images to obtain a group of sample pairs; and training the initial generator by using a group of sample pairs to obtain a target generator, wherein the difference value between a first recognition probability and a second recognition probability obtained by inputting the corresponding sample generation image and the sample shooting image to the target discriminator is smaller than or equal to a preset threshold value, the first recognition probability is used for representing the probability that the sample generation image is the generation image, and the second recognition probability is used for representing the probability that the sample shooting image is the shooting image.
Optionally, in this embodiment, the first group of ultrasonic echo signals and the first group of shot images that correspond in time may be determined by, but not limited to, comparing the timestamps of the ultrasonic echo signals with the timestamps of the shot images.
Alternatively, in this embodiment, the condition that the difference between the first recognition probability and the second recognition probability is less than or equal to the predetermined threshold may be characterized by the convergence of a loss function, for example:
min_G max_D V(D, G) = E_{x~p(x)}[log D(x)] + E_{z~p(z)}[log(1 − D(G(z)))]
Equation 1: the generative adversarial network loss function
where G and D correspond to the target generator and the target discriminator, respectively; x represents the captured image data and z represents the noise data of the ultrasonic echo signal; G(z) represents the image recovered from the sound wave; D(x) represents the second recognition probability and D(G(z)) represents the first recognition probability; p(x) represents the distribution of the captured image data and p(z) represents the distribution of the noise data; E_{x~p(x)}[·] is the average of the discriminator's loss over captured images, and E_{z~p(z)}[·] is the average over images generated by the target generator. The target generator is obtained when the loss function converges to the predetermined threshold, at which point the difference between the first recognition probability and the second recognition probability in Equation 1 is less than or equal to the predetermined threshold.
When the variation of the loss function is determined to have reached the critical value, it is determined that the difference between the first recognition probability and the second recognition probability, obtained by inputting the temporally corresponding sample generated image and sample captured image to the target discriminator, is less than or equal to the predetermined threshold. For example, if the predetermined threshold is set to 0.3 and the target discriminator outputs a first recognition probability of 0.6 and a second recognition probability of 0.4, the difference between them is 0.2. Since this difference is smaller than the predetermined threshold, the target discriminator cannot distinguish whether an image generated by the generator is a real captured image, that is, the training of the initial generator is finished and the target generator is obtained.
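Equation 1 can be estimated from batches of discriminator outputs. A minimal sketch, assuming D outputs probabilities in (0, 1); at the theoretical optimum, where the discriminator is fully confused and outputs 0.5 for both real and generated samples, the value approaches 2·log 0.5 ≈ −1.386.

```python
import numpy as np

def gan_value(d_real, d_fake, eps=1e-8):
    # Monte-Carlo estimate of V(D, G) from Equation 1:
    # E_{x~p(x)}[log D(x)] + E_{z~p(z)}[log(1 - D(G(z)))].
    # eps guards against log(0) for saturated discriminator outputs.
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# A well-trained generator drives D toward 0.5 on both real (captured)
# and fake (generated) samples, so V(D, G) approaches 2 * log(0.5).
d_real = np.full(100, 0.5)  # D(x): second recognition probabilities
d_fake = np.full(100, 0.5)  # D(G(z)): first recognition probabilities
print(round(gan_value(d_real, d_fake), 3))  # -1.386
```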
By the embodiment, the training of the initial generator can be completed, the obtained target generator can generate the target image which cannot be distinguished by the target discriminator, the training of the initial generator is completed, and the technical effect of the target generator is obtained.
In an alternative embodiment, training the initial generator using a set of sample pairs, and obtaining the target generator comprises: repeatedly executing the following steps until the difference between the first recognition probability and the second recognition probability output by the target discriminator is less than or equal to a predetermined threshold: inputting sample ultrasonic echo signals in a group of sample pairs into an initial generator to generate a sample generation image; inputting a sample generation image and a sample shooting image corresponding to the sample ultrasonic echo signal in time in a group of sample pairs into a target discriminator to obtain a first identification probability and a second identification probability output by the target discriminator; in the event that the difference between the first recognition probability and the second recognition probability is greater than a predetermined threshold, the parameters in the initial generator are adjusted.
Optionally, in this embodiment, fig. 6 is a schematic diagram of an imaging method based on an ultrasonic echo signal according to an embodiment of the present invention, and as shown in fig. 6, the specific flow steps for performing model training are as follows:
s602, acquiring a group of sample pairs;
s604, inputting the sample ultrasonic echo signals in a group of sample pairs into an initial generator to generate a sample generation image;
s606, inputting the sample generation image and a sample shooting image corresponding to the sample ultrasonic echo signal in time in a group of sample pairs into a target discriminator;
s608, judging whether the target discriminator can still distinguish the sample generation image from the sample shooting image; if not, executing step S610, otherwise returning to step S604;
and S610, ending.
Through the above steps, training of the generative adversarial network is completed, so that the target generator can output a generated image that the target discriminator cannot distinguish from a captured image.
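The loop S602-S610 can be illustrated with toy stand-ins. Everything below is a hypothetical simplification of the described procedure: the bias-shift "generator", the mean-distance "discriminator", the 0.05 threshold, and the sign-based parameter update are not from the specification; they merely show the iterate-until-indistinguishable control flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(echo, bias):
    # Toy generator: shifts the echo signal by a learnable bias parameter.
    return echo + bias

def discriminator(samples, real_mean):
    # Toy discriminator: probability that samples are real captured images,
    # judged by how close their mean is to the real-image mean.
    return 1.0 / (1.0 + abs(samples.mean() - real_mean))

echoes = rng.standard_normal((100, 16))        # S602: sample echo signals
photos = rng.standard_normal((100, 16)) + 2.0  # S602: sample captured images
real_mean = photos.mean()
bias, threshold = 0.0, 0.05

for step in range(200):                        # repeat S604-S608
    fake = generator(echoes, bias)             # S604: generate sample images
    p_fake = discriminator(fake, real_mean)    # S606: first recognition prob.
    p_real = discriminator(photos, real_mean)  # S606: second recognition prob.
    if abs(p_real - p_fake) <= threshold:      # S608: indistinguishable?
        break                                  # S610: training ends
    # Adjust the generator parameter (crude sign-based update).
    bias += 0.1 * np.sign(real_mean - fake.mean())

print(abs(p_real - p_fake) <= threshold)  # True once training converges
```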
Optionally, in this embodiment, fig. 7 is a schematic diagram of an imaging method based on an ultrasonic echo signal according to an embodiment of the present invention, and as shown in fig. 7, a structure diagram specifically used for model training is as follows:
The acquired sample ultrasonic echo signal is input to a generator G702; the generator G702 generates a sample generated image and inputs it to a discriminator D704. The discriminator D704 judges each received image: when the received image is judged to be a sample generated image, the judgment result is recorded as false, and when it is judged to be a captured image, the result is recorded as true. After the training process is repeated many times, the discriminator outputs the first recognition probability according to the number of times the judgment result is recorded as false, and outputs the second recognition probability according to the number of times it is recorded as true.
In this embodiment, a generative adversarial network structure is adopted: the generator is trained to realize the mapping from sound waves to planar images, and the discriminator distinguishes generated images from real images. The generator aims to make its generated samples confuse the discriminator so that they cannot be distinguished from real samples, while the discriminator aims to distinguish real samples from generated ones. Through the mutual game between the two, the final aim is a generation network that produces samples whose authenticity the discriminator cannot determine.
In an alternative embodiment, in the case that the difference between the first recognition probability and the second recognition probability is greater than a predetermined threshold, adjusting the parameter in the initial generator includes: under the condition that the difference value between the first recognition probability and the second recognition probability is larger than the preset threshold value, determining the corresponding relation between the sample feature vector contained in the sample ultrasonic echo signal and the geometric image of the target object through the neural network model; and adjusting parameters in the initial generator based on the corresponding relation between the sample characteristic vector and the geometric image of the target object.
Optionally, in this embodiment, the parameters may include, but are not limited to, the correspondence between the sample feature vector contained in the ultrasonic echo and the geometric image of the target object. The sample feature vector carries geometric shape information of the target object and is transformed, based on physical knowledge, into the corresponding geometric image. The sample feature vector may include, but is not limited to, distance information between the terminal and the target object, as well as the shape and material information of the target object itself. The initial generator is adjusted by extracting the sample feature vector and obtaining the geometric image of the corresponding target object.
Through the embodiment, the image generated by the generator according to the ultrasonic echo signal can better accord with the physical rule of the corresponding target object, so that the target discriminator is more difficult to distinguish the real image from the sample generated image, and the technical effect of improving the imaging accuracy based on the ultrasonic echo signal is achieved.
In an alternative embodiment, processing the first set of ultrasound echo signals and the first set of captured images to obtain a set of sample pairs comprises: processing the first group of ultrasonic echo signals to obtain a group of sample ultrasonic echo signals; processing the first group of shot images to obtain a group of sample shot images; and time alignment is carried out on the group of sample ultrasonic echo signals and the group of sample shooting images to obtain a group of sample pairs.
Optionally, in this embodiment, time-aligning the group of sample ultrasonic echo signals and the group of sample shot images may include, but is not limited to: acquiring a first group of timestamps corresponding to the first group of ultrasonic echo signals and a second group of timestamps corresponding to the first group of shot images; obtaining the temporal correspondence between the group of sample ultrasonic echo signals and the group of sample shot images according to the temporal correspondence between the first group of timestamps and the second group of timestamps, where a first timestamp in the first group corresponds to a second timestamp in the second group when the difference between the first timestamp and the second timestamp is smaller than the difference between the first timestamp and any other timestamp in the second group; and determining the sample ultrasonic echo signals and sample shot images in this correspondence as a group of sample pairs.
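The nearest-timestamp pairing described above might be sketched as follows; the timestamp values are illustrative, and a real pipeline would carry the echo segments and image frames alongside their timestamps.

```python
def align_samples(echo_stamps, image_stamps):
    """Pair each echo timestamp with the closest image timestamp.

    A minimal sketch: each echo timestamp is matched to the image
    timestamp whose difference from it is smallest, per the rule above.
    """
    pairs = []
    for t_echo in echo_stamps:
        t_image = min(image_stamps, key=lambda t: abs(t - t_echo))
        pairs.append((t_echo, t_image))
    return pairs

echo_ts = [0.00, 0.11, 0.24]          # hypothetical times in seconds
image_ts = [0.01, 0.12, 0.20, 0.25]
print(align_samples(echo_ts, image_ts))
# [(0.0, 0.01), (0.11, 0.12), (0.24, 0.25)]
```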
By the embodiment, a group of sample pairs used for inputting the target generator to generate the target image can be generated, so that the technical effects of generating the sample to generate the image, finishing the training of the model, improving the imaging efficiency based on the ultrasonic echo signal and reducing the imaging cost based on the ultrasonic echo signal are achieved.
In an alternative embodiment, processing the first set of ultrasound echo signals to obtain a set of sample ultrasound echo signals includes: filtering and rejecting direct sound wave signals in the first group of ultrasonic echo signals to obtain a target reflected sound wave sequence; dividing the target reflected sound wave sequence to obtain a first group of target subsequences with the length being a preset length; filtering the first group of target subsequences by using a target filter to obtain a second group of target subsequences; and carrying out normalization processing on the second group of target subsequences to obtain a group of sample ultrasonic echo signals.
Alternatively, in this embodiment, the ultrasonic signal emitted by the speaker not only enters the microphone through reflection off the surface of the target object but is also emitted directly to the microphone, so the collected ultrasonic echo signal contains both the direct sound wave signal from the speaker and the reflected sound wave signal obtained by reflection off the target object. The direct sound wave travels a shorter path than the reflected sound wave and is attenuated less, so the direct sound wave signal needs to be filtered out first to avoid its influence. The first group of target subsequences is then filtered with the selected filter to obtain the second group of target subsequences, and the second group of target subsequences is normalized using a sliding window to obtain the processed sample ultrasonic echo signals.
The above processing manner of the ultrasonic echo signal is only an example, and the present invention is not limited to this.
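The four processing steps (removing the direct wave, segmenting into fixed-length subsequences, filtering, sliding-window normalization) can be sketched as below. The cutoff length, segment length, window size, and the moving-average stand-in for the "target filter" are all hypothetical choices; the specification does not fix these parameters.

```python
import numpy as np

def preprocess_echo(signal, direct_len=50, seg_len=100, win=25):
    # 1) Filter out the direct wave: the direct path arrives first, so we
    #    drop the leading samples (direct_len is a hypothetical cutoff).
    reflected = signal[direct_len:]
    # 2) Divide the reflected sequence into subsequences of preset length.
    n = len(reflected) // seg_len
    segments = reflected[:n * seg_len].reshape(n, seg_len)
    # 3) "Target filter" stand-in: a simple moving-average smoother.
    kernel = np.ones(5) / 5.0
    filtered = np.array([np.convolve(s, kernel, mode="same") for s in segments])
    # 4) Sliding-window normalization to zero mean, unit variance per window.
    out = []
    for s in filtered:
        norm = np.empty_like(s)
        for i in range(0, seg_len, win):
            w = s[i:i + win]
            norm[i:i + win] = (w - w.mean()) / (w.std() + 1e-8)
        out.append(norm)
    return np.array(out)

rng = np.random.default_rng(1)
samples = preprocess_echo(rng.standard_normal(1050))
print(samples.shape)  # 10 subsequences of preset length 100
```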
By the embodiment, a group of ultrasonic echo signals can be processed, so that the obtained sample ultrasonic echo signals can generate a target image through the target generator, and the applicability of imaging based on the ultrasonic echo signals is improved.
In an alternative embodiment, processing the first group of shot images to obtain a group of sample shot images includes: performing an image augmentation operation on the first group of shot images to obtain the group of sample shot images, where the image augmentation operation includes adding noise to the first group of shot images, cutting the first group of shot images, and reducing the resolution of the first group of shot images.
Alternatively, in this embodiment, the captured images may be regarded as an image stream with short intervals; the stream is therefore cut according to the timestamps to obtain each captured frame and its timestamp, and the captured images are then subjected to augmentation operations (noise addition, center cropping, resolution reduction, etc.).
The above processing manner of the captured image is only an example, and the present invention is not limited to this.
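The augmentation operations mentioned (noise addition, center cropping, resolution reduction) might be sketched as follows; the crop ratio, downscale factor, and noise level are illustrative values, not ones fixed by the specification.

```python
import numpy as np

def augment(image, rng, crop=0.8, downscale=2, noise_std=0.05):
    # Increase noise: add Gaussian noise to the frame.
    noisy = image + rng.normal(0.0, noise_std, image.shape)
    # Center cropping: keep a centered fraction of the original size.
    h, w = noisy.shape
    ch, cw = int(h * crop), int(w * crop)
    top, left = (h - ch) // 2, (w - cw) // 2
    cropped = noisy[top:top + ch, left:left + cw]
    # Resolution reduction: simple subsampling by the downscale factor.
    return cropped[::downscale, ::downscale]

rng = np.random.default_rng(2)
frame = rng.random((100, 100))    # one frame cut from the image stream
print(augment(frame, rng).shape)  # (40, 40): 80% crop, then 2x downscale
```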
By this embodiment, a group of shot images can be processed so that the obtained sample shot images can be used by the target discriminator to judge whether the target image generated by the target generator is a real image, thereby completing imaging based on the ultrasonic echo signal, facilitating the input of the corresponding samples into the target generator to generate the target image, and improving the applicability of imaging based on the ultrasonic echo signal.
In an alternative embodiment, after training the initial generator using a group of sample pairs and obtaining the target generator, the method further comprises: acquiring a second group of ultrasonic echo signals and a second group of shot images that correspond in time, where the second group of ultrasonic echo signals and the second group of shot images are the ultrasonic echo signals and shot images acquired in a preset scenario; processing the second group of ultrasonic echo signals and the second group of shot images to obtain a second group of sample pairs; and updating the target generator, or the target discriminator, or both, with the second group of sample pairs.
Optionally, in this embodiment, the ultrasonic echo signals and captured images acquired in the preset scene may range from target object images recovered from ultrasonic echoes in a simple scene to more complex acoustic images for a specific task (for example, a human face). Under this setting, it is impractical to collect a large number of data samples for the specific task and retrain the model from scratch (for example, large-scale face data collection takes a long time and causes user discomfort), but it is feasible to obtain a small number of samples: the device only needs to be invoked for a few seconds, for example while the user inputs a verification code, to collect hundreds of samples. This small sample set can then be used to fine-tune the basic model. Therefore, the model obtained in the previous stage is selected as the basic model and, guided by the small amount of data collected for the specific task, the basic generative adversarial network model is updated.
With this embodiment, because the basic model has already learned the simple physical correspondence, the target generator and/or the target discriminator can be updated with only a small number of samples from the preset scene as guidance. This achieves effective imaging based on ultrasonic echo signals while reducing the cost of such imaging and improving its applicability.
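The fine-tuning idea can be illustrated with a deliberately small stand-in: a single linear map plays the role of the pretrained generator, and a few low-learning-rate gradient steps on a few hundred task-specific samples play the role of the guided update. All shapes, rates, and step counts are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def fine_tune(W_base, echoes, images, lr=0.1, steps=100):
    """Start from the base model's weights and take a modest number of
    low-learning-rate gradient steps on a small task-specific sample set.
    A linear 'generator' stands in for the full generative network."""
    W = W_base.copy()
    for _ in range(steps):
        pred = echoes @ W                                  # generate from echo features
        grad = echoes.T @ (pred - images) / len(echoes)    # MSE gradient
        W -= lr * grad                                     # small update near W_base
    return W

rng = np.random.default_rng(0)
W_true = rng.normal(size=(8, 4))
echoes = rng.normal(size=(200, 8))          # a few hundred samples, as in the text
images = echoes @ W_true
W_base = W_true + 0.1 * rng.normal(size=W_true.shape)  # pretrained, nearly correct
W_tuned = fine_tune(W_base, echoes, images)
```

The design point mirrors the text: because the base weights already encode most of the mapping, a small sample set suffices to close the remaining gap.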
The invention will be further illustrated with reference to specific examples:
Optionally, in this embodiment, Fig. 8 is a schematic flowchart of another imaging method based on an ultrasonic echo signal according to an embodiment of the present invention. As shown in Fig. 8, the steps of the flow are as follows:
S802, adding a field or task that requires imaging based on ultrasonic echo signals;
S804, collecting and processing incremental data according to the acquired field information or task information;
S806, obtaining a pre-trained generative adversarial network model, where the model includes a generator and a discriminator;
S808, acquiring a group of sample pairs in a preset scene;
S810, generating, from the sample ultrasonic echo signal in a sample pair, a sample generated image intended to confuse the target discriminator;
S812, inputting the sample generated image, together with the sample captured image that corresponds in time to the sample ultrasonic echo signal in the sample pair, into the target discriminator for discrimination;
S814, judging whether the target discriminator can still distinguish the sample generated image from the sample captured image; if it cannot, jumping to step S816; otherwise, performing an update operation on the target discriminator, on the target generator, or on both, and returning to step S810;
S816, completing the update of the generative adversarial network, yielding a target generator and a target discriminator for ultrasonic echo signal imaging based on the field information or task information;
S818, determining a target object to be imaged based on the ultrasonic echo signal;
S820, transmitting ultrasonic waves through a loudspeaker on a target terminal, and receiving ultrasonic echo signals through a microphone on the target terminal;
S822, processing the ultrasonic echo signals to obtain a target ultrasonic echo signal;
S824, inputting the target ultrasonic echo signal into the target generator, and generating the target image corresponding to the target object based on a convolutional neural network model in the generator.
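The adversarial loop of steps S810 through S816 can be sketched with a deliberately tiny stand-in model: the "generator" and "discriminator" below are one-parameter affine/logistic units rather than neural networks, and the data, batch size, learning rate, and stopping threshold are all illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

w, b = 0.1, 0.0            # toy generator: fake = w*z + b
u, v = 0.5, 0.0            # toy discriminator: D(x) = sigmoid(u*x + v)
lr, threshold = 0.05, 0.05

for step in range(5000):
    z = rng.normal(size=256)                 # stand-in for echo-derived input
    real = rng.normal(2.0, 0.5, size=256)    # stand-in for captured images
    fake = w * z + b                         # S810: generate candidate images

    # S812: discriminator scores captured (real) and generated samples.
    d_real, d_fake = sigmoid(u * real + v), sigmoid(u * fake + v)

    # S814/S816: stop once the discriminator can no longer tell them apart.
    if abs(d_real.mean() - d_fake.mean()) <= threshold:
        break

    # Otherwise update the discriminator (ascent on log D(real) + log(1 - D(fake))).
    u += lr * ((1 - d_real) * real - d_fake * fake).mean()
    v += lr * ((1 - d_real) - d_fake).mean()

    # ...and update the generator (ascent on log D(fake), non-saturating form).
    g_fake = sigmoid(u * (w * z + b) + v)
    w += lr * ((1 - g_fake) * u * z).mean()
    b += lr * ((1 - g_fake) * u).mean()

fake_mean = (w * rng.normal(size=2000) + b).mean()   # generator output drifts toward the real mean
```

This only shows the control flow of S810 through S816; the patent's actual generator and discriminator are neural networks trained on echo/image sample pairs.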
the above-mentioned obtaining of the sample pair in the preset scene may be as shown in fig. 9, and the specific process is as follows:
s902, obtaining a sample ultrasonic echo signal in a preset scene;
s904, detecting and segmenting the sample ultrasonic echo signal to obtain a subsequence with a preset length;
s906, filtering the subsequence by using a filter, and performing normalization processing on a sliding window to obtain sample ultrasonic echo data;
s908, acquiring a sample shooting image in a preset scene;
s910, intercepting each frame of shot image and a corresponding timestamp;
s912, performing data amplification on the image to obtain sample shooting image data;
s914, time alignment is carried out on the sample ultrasonic echo data and the sample shooting image data, and mapping pairing is determined;
s916, a set of sample pairs is obtained.
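The segmentation, filtering, normalization, and time-alignment steps (S904 through S914) might be sketched as follows. The 5-tap moving-average filter, the segment and window lengths, the assumed 48 kHz sampling rate, and the 30 fps frame rate are all hypothetical stand-ins, not values from this disclosure.

```python
import numpy as np

def preprocess_echo(echo, seg_len=512, win=64):
    """S904-S906 sketch: split the raw echo stream into fixed-length
    subsequences, smooth each with a simple FIR filter (standing in for the
    'target filter'), then apply sliding-window normalization."""
    n_seg = len(echo) // seg_len
    segs = echo[:n_seg * seg_len].reshape(n_seg, seg_len)
    kernel = np.ones(5) / 5.0                        # toy low-pass FIR
    segs = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 1, segs)
    out = np.empty_like(segs)
    for i in range(0, seg_len, win):                 # per-window zero-mean/unit-std
        blk = segs[:, i:i + win]
        out[:, i:i + win] = ((blk - blk.mean(axis=1, keepdims=True))
                             / (blk.std(axis=1, keepdims=True) + 1e-8))
    return out

def align_pairs(echo_ts, frame_ts):
    """S914 sketch: pair each echo segment with the captured frame whose
    timestamp is nearest in time."""
    return [(i, int(np.argmin(np.abs(frame_ts - t)))) for i, t in enumerate(echo_ts)]

rng = np.random.default_rng(1)
segs = preprocess_echo(rng.normal(size=48000 * 2))          # ~2 s at an assumed 48 kHz
pairs = align_pairs(np.arange(len(segs)) * 512 / 48000.0,   # segment start times
                    np.arange(60) / 30.0)                   # 60 frames at assumed 30 fps
```

Each `(echo_segment, frame)` index pair then serves as one sample pair for the adversarial training above.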
When ultrasonic echo signals collected by the same terminal are input into the trained generative adversarial network model, a target image that the target discriminator in the model cannot identify as generated can be obtained; the target image is then output to a preset model for recognition, for example, a face recognition network.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an ultrasound echo signal based imaging apparatus for implementing the above-described ultrasound echo signal based imaging method. As shown in fig. 10, the apparatus includes:
the communication module 1002 is configured to transmit ultrasonic waves through a speaker on the target terminal and receive ultrasonic echo signals through a microphone on the target terminal;
the filtering module 1004 is configured to filter the ultrasonic echo signal to obtain a target echo signal;
a generating module 1006, configured to input the target echo signal into a target generator, extract a target feature vector from the target echo signal through the target generator, and input the target feature vector into a neural network model to generate a target image, where the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is a generator obtained by training an initial generator using a group of sample pairs, each sample pair includes a sample ultrasonic echo signal and a sample captured image that correspond in time, and the sample generated image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the sample captured image corresponding in time.
In an alternative embodiment, communication module 1002 includes: the first communication unit is used for transmitting ultrasonic waves through a loudspeaker on the target terminal and receiving ultrasonic echo signals through a microphone on the target terminal.
In an alternative embodiment, communication module 1002 further includes at least one of:
the second communication unit is used for transmitting ultrasonic waves through a loudspeaker on the target terminal and receiving ultrasonic echo signals through a plurality of microphones on the target terminal;
the third communication unit is used for transmitting ultrasonic waves through a plurality of loudspeakers on the target terminal and receiving ultrasonic echo signals through a microphone on the target terminal;
and the fourth communication unit is used for transmitting ultrasonic waves through a plurality of loudspeakers on the target terminal and receiving ultrasonic echo signals through a plurality of microphones on the target terminal.
In an alternative embodiment, the apparatus is further configured to: after a target echo signal is input into a target generator, a target characteristic vector is extracted from the target echo signal through the target generator, and a target image is generated by inputting the target characteristic vector into a neural network model, the target image is matched with a pre-acquired image to obtain a matching result, wherein the matching result is used for indicating whether the target image is matched with a predetermined image.
In an alternative embodiment, the apparatus is further configured to: after a target echo signal is input into a target generator, a target characteristic vector is extracted from the target echo signal through the target generator, and a target image is generated by inputting the target characteristic vector into a neural network model, and in the case that a matching result shows that the target image is not matched with a predetermined image, abnormal prompt information is output on a target terminal, wherein the abnormal prompt information is used for prompting that imaging based on an ultrasonic echo signal is abnormal; under the condition that the matching result shows that the target image is not matched with the predetermined image, locking a screen of the target terminal; and in the case that the matching result shows that the target image does not match the predetermined image, canceling the payment operation currently performed by the target terminal.
In an alternative embodiment, the apparatus is further configured to: before transmitting ultrasonic waves through a loudspeaker on a target terminal and receiving ultrasonic echo signals through a microphone on the target terminal, acquiring a first group of ultrasonic echo signals and a first group of shot images which correspond in time; processing the first group of ultrasonic echo signals and the first group of shot images to obtain a group of sample pairs; and training the initial generator by using a group of sample pairs to obtain a target generator, wherein the difference value between a first recognition probability and a second recognition probability obtained by inputting the corresponding sample generation image and the sample shooting image to the target discriminator is smaller than or equal to a preset threshold value, the first recognition probability is used for representing the probability that the sample generation image is the generation image, and the second recognition probability is used for representing the probability that the sample shooting image is the shooting image.
In an alternative embodiment, the apparatus is configured to train the initial generator with a set of sample pairs to obtain the target generator by: repeatedly executing the following steps until the difference between the first recognition probability and the second recognition probability output by the target discriminator is less than or equal to a predetermined threshold: inputting sample ultrasonic echo signals in a group of sample pairs into an initial generator to generate a sample generation image; inputting a sample generation image and a sample shooting image corresponding to the sample ultrasonic echo signal in time in a group of sample pairs into a target discriminator to obtain a first identification probability and a second identification probability output by the target discriminator; the parameters in the initial generator are adjusted in the event that the difference between the first recognition probability and the second recognition probability is greater than a predetermined threshold.
In an alternative embodiment, the above apparatus is configured to adjust the parameter in the initial generator if the difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold value by: under the condition that the difference value between the first recognition probability and the second recognition probability is larger than the preset threshold value, determining the corresponding relation between the sample feature vector contained in the sample ultrasonic echo signal and the geometric image of the target object through the neural network model; and adjusting parameters in the initial generator based on the corresponding relation between the sample characteristic vector and the geometric image of the target object.
In an alternative embodiment, the apparatus is configured to process the first set of ultrasound echo signals and the first set of captured images to obtain a set of sample pairs by: processing the first group of ultrasonic echo signals to obtain a group of sample ultrasonic echo signals; processing the first group of shot images to obtain a group of sample shot images; and time alignment is carried out on the group of sample ultrasonic echo signals and the group of sample shooting images to obtain a group of sample pairs.
In an optional embodiment, the apparatus is configured to process the first set of ultrasound echo signals to obtain a set of sample ultrasound echo signals, and includes: filtering and rejecting direct sound wave signals in the first group of ultrasonic echo signals to obtain a target reflected sound wave sequence; dividing the target reflected sound wave sequence to obtain a first group of target subsequences with the length being a preset length; filtering the first group of target subsequences by using a target filter to obtain a second group of target subsequences; and carrying out normalization processing on the second group of target subsequences to obtain a group of sample ultrasonic echo signals.
In an alternative embodiment, the apparatus is configured to process the first group of captured images to obtain a group of sample captured images by: performing an image augmentation operation on the first group of captured images to obtain the group of sample captured images, where the image augmentation operation includes adding noise to the first group of captured images, cropping the first group of captured images, and reducing the resolution of the first group of captured images.
In an alternative embodiment, the apparatus is further configured to: after the initial generator is trained using a group of sample pairs to obtain the target generator, acquire a second group of ultrasonic echo signals and a second group of captured images that correspond in time, where the second group of ultrasonic echo signals and the second group of captured images are acquired in a preset scene; process the second group of ultrasonic echo signals and the second group of captured images to obtain a second group of sample pairs; and update the target generator, the target discriminator, or both, using the second group of sample pairs.
According to a further aspect of an embodiment of the present invention, there is also provided an electronic device for implementing the above-mentioned imaging method based on ultrasonic echo signals, the electronic device including a memory in which a computer program is stored and a processor configured to execute the steps in any of the above-mentioned method embodiments by the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, transmitting ultrasonic waves through a loudspeaker on the target terminal and receiving ultrasonic echo signals through a microphone on the target terminal;
S2, filtering the ultrasonic echo signals to obtain a target echo signal;
and S3, inputting the target echo signal into a target generator, extracting a target feature vector from the target echo signal through the target generator, and inputting the target feature vector into a neural network model to generate a target image, where the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is a generator obtained by training an initial generator using a group of sample pairs, each sample pair includes a sample ultrasonic echo signal and a sample captured image that correspond in time, and the sample generated image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the sample captured image corresponding in time.
Alternatively, as will be understood by those skilled in the art, the electronic device may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile Internet device (MID), a PAD, and the like. The structure of the electronic device is not limited in the present invention. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.), or have a different configuration from that described herein.
The memory may be configured to store software programs and modules, such as the program instructions/modules corresponding to the imaging method and apparatus based on ultrasonic echo signals in the embodiments of the present invention; the processor executes various functional applications and data processing by running the software programs and modules stored in the memory, thereby implementing the imaging method based on ultrasonic echo signals. The memory may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and such remote memory may be connected to the terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory may be used, but is not limited to being used, for storing information such as the preset image and the target image. As an example, the memory may include, but is not limited to, the communication module 1002, the filtering module 1004, and the generating module 1006 in the above imaging apparatus based on ultrasonic echo signals. In addition, the memory may further include, but is not limited to, other module units in the above imaging apparatus, which are not described in detail in this example.
Optionally, the transmission device is configured to receive or send data via a network. Specific examples of the network may include wired and wireless networks. In one example, the transmission device includes a network adapter (NIC), which can be connected via a network cable to a router or other network device so as to communicate with the Internet or a local area network. In another example, the transmission device is a radio frequency (RF) module, which is configured to communicate with the Internet wirelessly.
In addition, the electronic device further includes: the display is used for displaying the application interface; and a connection bus for connecting the respective module parts in the electronic apparatus.
According to a further aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, transmitting ultrasonic waves through a loudspeaker on the target terminal and receiving ultrasonic echo signals through a microphone on the target terminal;
S2, filtering the ultrasonic echo signals to obtain a target echo signal;
and S3, inputting the target echo signal into a target generator, extracting a target feature vector from the target echo signal through the target generator, and inputting the target feature vector into a neural network model to generate a target image, where the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is a generator obtained by training an initial generator using a group of sample pairs, each sample pair includes a sample ultrasonic echo signal and a sample captured image that correspond in time, and the sample generated image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the sample captured image corresponding in time.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (15)

1. An imaging method based on ultrasonic echo signals, comprising:
transmitting ultrasonic waves through a loudspeaker on a target terminal, and receiving ultrasonic echo signals through a microphone on the target terminal;
filtering the ultrasonic echo signal to obtain a target echo signal;
inputting the target echo signal into a target generator, extracting a target feature vector from the target echo signal through the target generator, and inputting the target feature vector into a neural network model to generate a target image, wherein the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is a generator obtained by training an initial generator using a set of sample pairs, each sample pair comprises a sample ultrasonic echo signal and a sample shooting image corresponding in time, and a sample generation image obtained by inputting the sample ultrasonic echo signal into the target generator cannot be distinguished by a target discriminator from the sample shooting image corresponding in time.
2. The method of claim 1, wherein transmitting ultrasonic waves through a speaker on a target terminal and receiving ultrasonic echo signals through a microphone on the target terminal comprises:
transmitting the ultrasonic wave through a speaker on the target terminal, and receiving the ultrasonic echo signal through a microphone on the target terminal.
3. The method of claim 1, wherein transmitting the ultrasonic waves through a speaker on the target terminal and receiving the ultrasonic echo signals through a microphone on the target terminal comprises one of:
transmitting the ultrasonic waves through a loudspeaker on the target terminal, and receiving the ultrasonic echo signals through a plurality of microphones on the target terminal;
transmitting the ultrasonic waves through a plurality of loudspeakers on the target terminal, and receiving the ultrasonic echo signals through a microphone on the target terminal;
the ultrasonic waves are transmitted through a plurality of loudspeakers on the target terminal, and the ultrasonic echo signals are received through a plurality of microphones on the target terminal.
4. The method of claim 1, wherein after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector to a neural network model, the method further comprises:
and matching the target image with a pre-acquired image to obtain a matching result, wherein the matching result is used for indicating whether the target image is matched with the predetermined image.
5. The method of claim 4, wherein after inputting the target echo signal to a target generator, extracting a target feature vector from the target echo signal by the target generator, and generating a target image by inputting the target feature vector to a neural network model, the method further comprises:
under the condition that the matching result shows that the target image is not matched with the predetermined image, outputting abnormal prompt information on the target terminal, wherein the abnormal prompt information is used for prompting that imaging based on an ultrasonic echo signal is abnormal;
locking a screen of the target terminal under the condition that the matching result shows that the target image is not matched with the predetermined image;
and under the condition that the matching result shows that the target image is not matched with the predetermined image, canceling the payment operation currently performed by the target terminal.
6. The method of any one of claims 1 to 5, wherein before transmitting the ultrasonic waves through a speaker on a target terminal and receiving the ultrasonic echo signals through a microphone on the target terminal, the method further comprises:
acquiring a first group of ultrasonic echo signals and a first group of shot images which correspond in time;
processing the first group of ultrasonic echo signals and the first group of shot images to obtain the group of sample pairs;
training the initial generator by using the group of sample pairs to obtain the target generator, wherein a difference value between a first recognition probability and a second recognition probability, obtained by inputting the sample generation image and the sample shooting image which correspond in time into the target discriminator, is smaller than or equal to a predetermined threshold value, the first recognition probability is used for representing the probability that the sample generation image is a generation image, and the second recognition probability is used for representing the probability that the sample shooting image is a shooting image.
7. The method of claim 6, wherein training the initial generator using the set of sample pairs to obtain the target generator comprises:
repeatedly performing the following steps until the difference between the first recognition probability and the second recognition probability output by the target discriminator is less than or equal to the predetermined threshold:
inputting the sample ultrasound echo signals in the set of sample pairs into the initial generator, generating the sample generation image;
inputting the sample generation image and the sample shooting image corresponding to the sample ultrasonic echo signal in time in the group of sample pairs into the target discriminator to obtain the first identification probability and the second identification probability output by the target discriminator;
adjusting a parameter in the initial generator in the event that a difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold.
8. The method of claim 6, wherein adjusting the parameters in the initial generator in the case that the difference between the first recognition probability and the second recognition probability is greater than the predetermined threshold comprises:
under the condition that the difference value between the first recognition probability and the second recognition probability is greater than the predetermined threshold, determining the corresponding relation between the sample feature vector contained in the sample ultrasonic echo signal and the geometric image of the target object through the neural network model;
and adjusting parameters in the initial generator based on the corresponding relation between the sample characteristic vector and the geometric image of the target object.
9. The method of claim 6, wherein said processing the first set of ultrasound echo signals and the first set of captured images to obtain the set of sample pairs comprises:
processing the first group of ultrasonic echo signals to obtain a group of sample ultrasonic echo signals;
processing the first group of shot images to obtain a group of sample shot images;
and time aligning the group of sample ultrasonic echo signals and the group of sample shooting images to obtain the group of sample pairs.
10. The method of claim 9, wherein said processing the first set of ultrasonic echo signals to obtain a set of sample ultrasonic echo signals comprises:
filtering out the direct sound wave signals from the first set of ultrasonic echo signals to obtain a target reflected sound wave sequence;
dividing the target reflected sound wave sequence into a first set of target subsequences of a predetermined length;
filtering the first set of target subsequences with a target filter to obtain a second set of target subsequences; and
normalizing the second set of target subsequences to obtain the set of sample ultrasonic echo signals.
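The four preprocessing steps of claim 10 can be sketched as one pipeline. The segment length, the number of direct-path samples, the moving-average kernel standing in for the "target filter", and the min-max normalization are all illustrative assumptions — the patent fixes the order of the steps, not the parameters:

```python
import numpy as np

def preprocess_echoes(raw, direct_len=64, seg_len=256, kernel=None):
    """Drop the direct-path samples, split the reflected sequence into
    fixed-length subsequences, filter each one, and normalize to [0, 1]."""
    if kernel is None:
        kernel = np.ones(5) / 5.0                 # assumed stand-in for the target filter
    reflected = raw[direct_len:]                  # reject the direct sound wave
    n_seg = len(reflected) // seg_len
    segments = reflected[:n_seg * seg_len].reshape(n_seg, seg_len)
    filtered = np.array([np.convolve(s, kernel, mode="same") for s in segments])
    lo = filtered.min(axis=1, keepdims=True)
    span = np.ptp(filtered, axis=1, keepdims=True)
    span[span == 0] = 1.0                         # avoid divide-by-zero on flat segments
    return (filtered - lo) / span                 # per-segment min-max normalization
```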
11. The method of claim 9, wherein said processing the first set of captured images to obtain a set of sample captured images comprises:
performing an image augmentation operation on the first set of captured images to obtain the set of sample captured images, wherein the image augmentation operation comprises adding noise to the first set of captured images, cropping the first set of captured images, and reducing the resolution of the first set of captured images.
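The three augmentation operations of claim 11 applied to a single grayscale image can be sketched as follows; the noise level, crop fraction, and downsampling factor are illustrative assumptions, and strided subsampling is used as a naive form of resolution reduction:

```python
import numpy as np

def augment_image(img, rng, noise_std=0.02, crop_frac=0.9, down_factor=2):
    """Add Gaussian noise, take a random crop, and reduce resolution
    (claim 11's augmentation); `img` is a 2-D array with values in [0, 1]."""
    noisy = np.clip(img + rng.normal(0.0, noise_std, img.shape), 0.0, 1.0)
    h, w = noisy.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = rng.integers(0, h - ch + 1)             # random crop origin
    left = rng.integers(0, w - cw + 1)
    cropped = noisy[top:top + ch, left:left + cw]
    return cropped[::down_factor, ::down_factor]  # naive resolution reduction
```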
12. The method of claim 6, wherein after training the initial generator using the set of sample pairs to obtain the target generator, the method further comprises:
acquiring a second set of ultrasonic echo signals and a second set of captured images that correspond in time, wherein the second set of ultrasonic echo signals and the second set of captured images are acquired in a predetermined scene;
processing the second set of ultrasonic echo signals and the second set of captured images to obtain a second set of sample pairs; and
updating the target generator with the second set of sample pairs, updating the target discriminator with the second set of sample pairs, or updating both the target generator and the target discriminator with the second set of sample pairs.
13. An imaging apparatus based on ultrasonic echo signals, comprising:
a communication module configured to transmit ultrasonic waves through a loudspeaker on a target terminal and receive ultrasonic echo signals through a microphone on the target terminal;
a filtering module configured to filter the ultrasonic echo signals to obtain a target echo signal; and
a generating module configured to input the target echo signal to a target generator, extract a target feature vector from the target echo signal through the target generator, and input the target feature vector to a neural network model to generate a target image, wherein the target image is an image recognized by the target terminal through ultrasonic waves, the target generator is obtained by training an initial generator using a set of sample pairs, each sample pair comprises a sample ultrasonic echo signal and a sample captured image that correspond in time, and the sample generated image obtained by inputting the sample ultrasonic echo signal to the target generator and the temporally corresponding sample captured image cannot be distinguished by a target discriminator.
14. A computer-readable storage medium comprising a stored program, wherein the program, when executed, performs the method of any one of claims 1 to 12.
15. An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is arranged to execute the method of any one of claims 1 to 12 by means of the computer program.
CN202010219890.0A 2020-03-25 2020-03-25 Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device Active CN111444830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010219890.0A CN111444830B (en) 2020-03-25 2020-03-25 Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111444830A true CN111444830A (en) 2020-07-24
CN111444830B CN111444830B (en) 2023-10-31

Family

ID=71648767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010219890.0A Active CN111444830B (en) 2020-03-25 2020-03-25 Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111444830B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153438A (en) * 2020-09-22 2020-12-29 康佳集团股份有限公司 Intelligent television control method based on ultrasonic waves, intelligent television and storage medium
WO2022156562A1 (en) * 2021-01-19 2022-07-28 腾讯科技(深圳)有限公司 Object recognition method and apparatus based on ultrasonic echo, and storage medium
WO2024009111A1 (en) * 2022-07-08 2024-01-11 Rewire Holding Ltd Authentication systems and computer-implemented methods

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101396277A (en) * 2007-09-26 2009-04-01 中国科学院声学研究所 Ultrasonics face recognition method and device
US20130006549A1 (en) * 2011-07-01 2013-01-03 Gronseth Cliff A Method and system for organic specimen feature identification in ultrasound image
CN105550674A (en) * 2016-02-01 2016-05-04 珠海格力电器股份有限公司 Method, device and system of verification of target object identity, and mobile terminal
CN105760825A (en) * 2016-02-02 2016-07-13 深圳市广懋创新科技有限公司 Gesture identification system and method based on Chebyshev feed forward neural network
CN106846306A (en) * 2017-01-13 2017-06-13 重庆邮电大学 A kind of ultrasonoscopy automatic describing method and system
CN108229404A (en) * 2018-01-09 2018-06-29 东南大学 A kind of radar echo signal target identification method based on deep learning
CN110021014A (en) * 2019-03-29 2019-07-16 无锡祥生医疗科技股份有限公司 Nerve fiber recognition methods, system and storage medium neural network based
CN110074813A (en) * 2019-04-26 2019-08-02 深圳大学 A kind of ultrasonic image reconstruction method and system
CN209231947U (en) * 2018-12-29 2019-08-09 航天信息股份有限公司 Ultrasonic wave human face scanning identification device
CN110322399A (en) * 2019-07-05 2019-10-11 深圳开立生物医疗科技股份有限公司 A kind of ultrasound image method of adjustment, system, equipment and computer storage medium
CN110647849A (en) * 2019-09-26 2020-01-03 深圳先进技术研究院 Neural regulation and control result prediction method and device and terminal equipment
CN110688957A (en) * 2019-09-27 2020-01-14 腾讯科技(深圳)有限公司 Living body detection method and device applied to face recognition and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PINGPING LIAO et al.: "Study on Compressed Air Leak Detection Using Ultrasonic Detection Technology and Instrument", 2011 6th IEEE Conference on Industrial Electronics and Applications, pages 1690-1693 *
MIAO ZHENWEI et al.: "Research on Ultrasonic Face Recognition Methods", Technical Acoustics, vol. 26, no. 5, pages 1060-1061 *

Also Published As

Publication number Publication date
CN111444830B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
CN111444830B (en) Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device
CN108197586A (en) Recognition algorithms and device
EP1205884B1 (en) Individual authentication method, individual authentication apparatus, information communication apparatus equipped with the apparatus, and individual authentication system including the apparatus
CN111881726B (en) Living body detection method and device and storage medium
JP7026225B2 (en) Biological detection methods, devices and systems, electronic devices and storage media
CN110688957B (en) Living body detection method, device and storage medium applied to face recognition
CN102663444A (en) Method for preventing account number from being stolen and system thereof
CN109697416A (en) A kind of video data handling procedure and relevant apparatus
CN109492550B (en) Living body detection method, living body detection device and related system applying living body detection method
CN111368811B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN110287671A (en) Verification method and device, electronic equipment and storage medium
CN111095246B (en) Method and electronic device for authenticating user
WO2022262606A1 (en) Living body detection method and apparatus, and electronic device and storage medium
CN114187547A (en) Target video output method and device, storage medium and electronic device
CN111079687A (en) Certificate camouflage identification method, device, equipment and storage medium
CN208351494U (en) Face identification system
CN106980836B (en) Identity verification method and device
CN112769872B (en) Conference system access method and system based on audio and video feature fusion
CN213601611U (en) Law enforcement appearance with voiceprint recognition function
CN112601054B (en) Pickup picture acquisition method and device, storage medium and electronic equipment
CN113705428A (en) Living body detection method and apparatus, electronic device, and computer-readable storage medium
CN113989870A (en) Living body detection method, door lock system and electronic equipment
CN114821820A (en) Living body detection method, living body detection device, computer equipment and storage medium
CN112969053A (en) In-vehicle information transmission method and device, vehicle-mounted equipment and storage medium
WO2022156562A1 (en) Object recognition method and apparatus based on ultrasonic echo, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40025898

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant