CN114723933A - Region information generation method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN114723933A
CN114723933A
Authority
CN
China
Prior art keywords
image
information
target
generating
space vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011506435.5A
Other languages
Chinese (zh)
Inventor
张韵东
孙向东
饶颖
李振华
韩建辉
徐祥
刘小涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd
Original Assignee
Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhongxing Micro Artificial Intelligence Chip Technology Co ltd
Priority to CN202011506435.5A
Publication of CN114723933A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure disclose a region information generation method and device, an electronic device and a computer readable medium. One embodiment of the method comprises: acquiring an image of a target object as a target image; generating a feature vector set based on the target image and a pre-deployed feature extraction model; generating a space vector information set according to the feature vector set; and inputting the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object. This implementation reduces the occupied video memory and main memory of the computer, which shortens the computer's image processing time and thereby improves its image processing efficiency.

Description

Region information generation method and device, electronic equipment and computer readable medium
Technical Field
The embodiments of the present disclosure relate to the technical field of computers, and in particular to a region information generation method and device, an electronic device and a computer readable medium.
Background
Region information generation is a basic technology in the field of computer vision. At present, region information generation methods generally process an image to extract feature information, and then process the extracted feature information to generate the region information.
However, this approach often suffers from the following technical problems:
first, the computer processes the image directly, which occupies a large amount of video memory and main memory, increases the computer's image processing time, and thus reduces its image processing efficiency;
second, because the factors that influence the generation of space vector information cannot be considered comprehensively, the generated space vector information is not accurate enough.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a region information generation method and apparatus, an electronic device, and a computer readable medium to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a region information generation method, the method including: acquiring an image of a target object as a target image; generating a feature vector set based on the target image and a pre-deployed feature extraction model; generating a space vector information set according to the feature vector set, wherein the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information; and inputting the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
In a second aspect, some embodiments of the present disclosure provide a region information generating apparatus, including: an acquisition unit configured to acquire an image of a target object as a target image; a first generation unit configured to generate a feature vector set based on the target image and a pre-deployed feature extraction model; a second generation unit configured to generate a space vector information set from the feature vector set, wherein the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information; and an input unit configured to input the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement the method as described in the first aspect.
In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method as described in the first aspect.
The above embodiments of the present disclosure have the following advantages: the region information generation method of some embodiments of the disclosure obtains the region information of the target object while improving the image processing efficiency of the computer. Specifically, the image processing efficiency of the computer decreases because the computer processes the image directly, which occupies a large amount of video memory and main memory and increases the image processing time. Based on this, the region information generation method of some embodiments of the present disclosure first acquires an image of a target object as a target image; acquiring an image that includes the target object facilitates subsequent processing of the target object. Second, a feature vector set is generated based on the target image and a pre-deployed feature extraction model. Subsequent image processing therefore does not need to operate on the whole image and can instead process the extracted feature vectors, which reduces the occupied video memory and main memory and increases the space available to the computer. Then, a space vector information set is generated from the feature vector set, where the space vector information represents a spatial structure between the feature vectors in the feature vector set. The obtained feature vector set can thus be converted into space vector information, a spatial structure over the feature vector set can be constructed, and the spatial relationship between the individual feature vectors can be determined. Finally, the space vector information set is input into a pre-trained embedded neural network model to obtain the region information of the target object. Because the feature vectors of the target image are extracted and the region information is obtained through the pre-trained embedded neural network model, the occupation of the computer's video memory and main memory is reduced, the image processing time is shortened, and the image processing efficiency of the computer is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of one application scenario of a region information generation method according to some embodiments of the present disclosure;
fig. 2 is a flowchart of some embodiments of a region information generation method according to the present disclosure;
FIG. 3 is a schematic block diagram of some embodiments of a region information generating apparatus according to the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device suitable for implementing the region information generation method of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments and the features of the embodiments in the present disclosure may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an" and "the" in this disclosure are illustrative rather than restrictive; those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of one application scenario of a region information generation method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, the computing device 101 may first acquire an image of a target object as a target image 102. The computing device 101 may then generate a feature vector set 104 based on the target image 102 and a pre-deployed feature extraction model 103. Next, a space vector information set 105 is generated from the feature vector set 104, where the space vector information represents a spatial structure between the feature vectors in the feature vector set 104 corresponding to the space vector information. Finally, the space vector information set 105 is input into a pre-trained embedded neural network model 106 to obtain the region information 107 of the target object.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above and implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or a single software module. No specific limitation is imposed here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of a region information generation method according to the present disclosure is shown. The region information generation method comprises the following steps:
Step 201, acquiring an image of a target object as a target image.
In some embodiments, the execution subject of the region information generation method (e.g., the computing device 101 shown in fig. 1) may acquire an image of the target object from a terminal device through a wired or wireless connection and take that image as the target image. The image of the target object may be an image that includes the target object and a background image, where the background image is the part of the target image excluding the target object.
Step 202, generating a feature vector set based on the target image and the pre-deployed feature extraction model.
In some embodiments, the execution subject may generate a feature vector set based on the target image and a pre-deployed feature extraction model. The feature extraction model may include a radar feature extraction model, a binocular camera specific feature extraction model, and a spatial feature extraction model. The radar feature extraction model may be a model for analyzing the pixel values of the target object. The binocular camera specific feature extraction model may be a model for processing an image. The spatial feature extraction model may be a model for creating space vector information, where the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information. A feature vector may be a feature vector extracted from the target image, for example through a CNN (Convolutional Neural Network) or an RNN (Recurrent Neural Network).
The radar feature extraction model, the binocular camera specific feature extraction model and the spatial feature extraction model can each be obtained by training a convolutional neural network or a deep neural network.
In some optional implementations of some embodiments, the executing subject may generate the feature vector set by:
Firstly, image graying processing is performed on the target image to obtain the gray value of each pixel point in the target image and the gray value of each background pixel point in the background image corresponding to the target image. The graying processing may obtain these gray values using a component method, a maximum value method, an average value method, or a weighted average method.
Secondly, a difference value is generated based on the gray value of each pixel point in the target image and the gray value of the corresponding background pixel point in the background image, yielding a difference value set.
As an example, the gray value of a pixel point in the target image may be 255 and the gray value of the corresponding background pixel point may be 55, so the resulting difference value is 200.
Thirdly, difference detection processing is performed on each difference value in the difference value set to generate target points, yielding a target point set. The difference detection processing may determine the pixel points whose difference value is greater than 200 as target points, where a target point may be a pixel point belonging to the target object.
Fourthly, the target point set is aggregated to generate the feature vector set. The aggregation may classify the scattered target points according to the relationships between them and connect target points of the same category to generate feature vectors. A runnable sketch of these four steps follows.
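As a concrete illustration of the four steps above, the following minimal Python sketch runs graying, difference detection and a simple aggregation end to end. It is a sketch under stated assumptions, not the patent's implementation: the function names, the BT.601 graying weights and the connected-component aggregation via scipy.ndimage.label are all illustrative choices.

    import numpy as np
    from scipy import ndimage

    def gray_weighted(rgb: np.ndarray) -> np.ndarray:
        # Weighted-average graying (one of the methods named above); the
        # BT.601 weights are an assumed choice, not mandated by the text.
        return rgb[..., :3].astype(np.float64) @ np.array([0.299, 0.587, 0.114])

    def target_points(target_rgb: np.ndarray, background_rgb: np.ndarray,
                      threshold: float = 200.0) -> np.ndarray:
        # Difference detection: pixels whose gray-value difference from the
        # background exceeds the threshold are taken as target points.
        diff = np.abs(gray_weighted(target_rgb) - gray_weighted(background_rgb))
        return np.argwhere(diff > threshold)  # (row, col) coordinates

    def aggregate(points: np.ndarray, shape: tuple) -> list:
        # A simple aggregation: group the scattered target points into
        # connected components and return one coordinate array per category.
        mask = np.zeros(shape, dtype=bool)
        mask[points[:, 0], points[:, 1]] = True
        labels, n = ndimage.label(mask)
        return [np.argwhere(labels == i + 1) for i in range(n)]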
Optionally, the execution subject may generate the difference value by the following two steps, sketched after them:
Firstly, in response to the difference between the pixel point and the background pixel point satisfying a first preset condition, a first predetermined threshold is determined as the difference value. The first preset condition may be that the difference between the pixel point and the background pixel point is greater than 200, and the first predetermined threshold may be 250.
Secondly, in response to the difference between the pixel point and the background pixel point satisfying a second preset condition, a second predetermined threshold is determined as the difference value. The second preset condition may be that the difference between the pixel point and the background pixel point is less than or equal to 200, and the second predetermined threshold may be 0.
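Read together, the two preset conditions snap every raw gray-value difference to one of two constants. A minimal sketch assuming the values quoted above (threshold 200, first predetermined threshold 250, second predetermined threshold 0):

    import numpy as np

    def snapped_difference_map(target_gray: np.ndarray,
                               background_gray: np.ndarray) -> np.ndarray:
        # First preset condition: difference > 200 -> 250;
        # second preset condition: difference <= 200 -> 0.
        diff = np.abs(target_gray.astype(np.int16) - background_gray.astype(np.int16))
        return np.where(diff > 200, 250, 0).astype(np.uint8)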
Optionally, the execution subject may generate the feature vector set by the following steps, sketched after the example below:
Firstly, coordinate information of the center point of the target object connected region is generated based on the target point set. The center point of the target object connected region may be the center point of a region that includes the target object, and its coordinate information may be the coordinate information of the center point of that region.
Secondly, parameter information of the target object connected region is generated based on the coordinate information of the center point of the target object connected region. The execution subject may combine the center point coordinate corresponding to that coordinate information with at least one key coordinate point on the boundary of the target object connected region to generate the parameter information of the target object connected region, where the boundary of the target object connected region can be generated by connecting the at least one key point coordinate in sequence.
As an example, the center point coordinate may be (1, 2), and the at least one key point coordinate may be (1, 1), (2, 3), (3, 3). The parameter information may be (2, 3), (1, 1), (3, 3).
Thirdly, the feature vector set is generated based on the parameter information. A generated feature vector may be a vector formed by any two coordinates included in the parameter information.
As an example, the feature vector set may be {(1, 1) → (2, 3), (2, 3) → (3, 3), (3, 3) → (1, 1)}.
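The worked example above chains the key coordinate points in order and closes the loop. A minimal sketch under that reading; the mean-of-points center is an assumption, since the text does not fix how the connected-region center point is computed:

    import numpy as np

    def region_center(points: np.ndarray) -> tuple:
        # Assumed rule: center point of the connected region = mean of its
        # target-point coordinates.
        return tuple(points.mean(axis=0))

    def feature_vector_set(keypoints: list) -> list:
        # Chain the boundary key points in order and close the loop,
        # matching the example (1,1)->(2,3), (2,3)->(3,3), (3,3)->(1,1).
        return list(zip(keypoints, keypoints[1:] + keypoints[:1]))

    # feature_vector_set([(1, 1), (2, 3), (3, 3)])
    # -> [((1, 1), (2, 3)), ((2, 3), (3, 3)), ((3, 3), (1, 1))]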
Step 203, generating a space vector information set according to the feature vector set.
In some embodiments, the execution subject may generate the space vector information set according to the feature vector set, where the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information. The space vector information may be generated by converting the two coordinates included in each feature vector into a world coordinate system to produce two spatial coordinates (turning the two-dimensional coordinates into three-dimensional ones), and then generating the space vector information based on those two spatial coordinates.
As an example, the space vector information may be (1, 1) → (2, 3) → (3, 3) → (1, 1).
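The coordinate-system conversion is only named here, not specified. One standard way to lift a 2-D pixel coordinate into 3-D is pinhole back-projection with a camera intrinsic matrix and a depth value; everything in this sketch (the matrix K, the fixed depth, the function names) is an assumption for illustration:

    import numpy as np

    def pixel_to_world(uv: tuple, K: np.ndarray, depth: float) -> np.ndarray:
        # Back-project pixel (u, v) through the inverse intrinsics and scale
        # by depth, turning the 2-D coordinate into a 3-D one.
        u, v = uv
        return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

    def to_space_vector(feature_vector: tuple, K: np.ndarray, depth: float) -> tuple:
        # Lift both endpoints of a feature vector, yielding one item of
        # space vector information (a pair of 3-D coordinates).
        start, end = feature_vector
        return pixel_to_world(start, K, depth), pixel_to_world(end, K, depth)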
In some optional implementations of some embodiments, generating the space vector information set according to the feature vector set may include:
generating the space vector information set based on the feature vector set using the following formula:
M_t = ((V_c(t+α) − V_ct) / α) · cos θ_t, M = {M_1, M_2, ..., M_n}

wherein M_t represents the space vector information; V_ct represents the coordinate information of the center point of the target image connected region; c represents the center point of the target image connected region; t represents the frame number of the target image in the acquired image set; α represents the frame interval value; V_c(t+α) represents the coordinate information of the center point of the connected region corresponding to the image of the (t+α)-th frame; θ_t represents the included angle between the vector formed from V_c(t−α) to V_ct and the vector formed from V_ct to V_c(t+α), with V_c(t−α) the coordinate information of the center point of the connected region corresponding to the image of the (t−α)-th frame; M represents the space vector information set; and n represents the number of pieces of space vector information included in the space vector information set.
The above formula is an inventive point of the embodiments of the present disclosure and addresses the second technical problem mentioned in the background: because the factors that influence the generation of space vector information cannot be considered comprehensively, the generated space vector information is not accurate enough. Solving this improves the accuracy of the generated space vector information. To this end, the formula introduces the connected region of the target image to preliminarily determine the approximate region of the target object. The coordinate of the center point of the connected region is introduced because the center point serves to locate the target object, so the approximate region of the target object in the target image can be estimated. To determine how the center point of the connected region changes between frames, the difference between the center point at the current frame and the center point a certain number of frames later is divided by the number of interval frames, which yields the change of the center point across different frame intervals. The vector included angle is introduced to account for the differing spatial positions of the feature vectors, so that the position of each space vector can be determined accurately. Because the influencing factors of the space vector information are considered comprehensively, the accuracy of the generated space vector information is improved.
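Under the reading of the formula given above (the exact combination of the displacement term and the included angle is itself an assumption), M_t can be computed from three connected-region center points spaced α frames apart:

    import numpy as np

    def included_angle(a: np.ndarray, b: np.ndarray) -> float:
        # Angle between two vectors, clipped for numerical safety.
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return 0.0 if denom == 0 else float(np.arccos(np.clip(a @ b / denom, -1.0, 1.0)))

    def space_vector_info(centers: dict, t: int, alpha: int) -> np.ndarray:
        # centers maps a frame number to the connected-region center point.
        v_prev = np.asarray(centers[t - alpha])
        v_t = np.asarray(centers[t])
        v_next = np.asarray(centers[t + alpha])
        theta = included_angle(v_t - v_prev, v_next - v_t)
        # M_t = ((V_c(t+a) - V_ct) / a) * cos(theta), per the reading above.
        return (v_next - v_t) / alpha * np.cos(theta)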
Step 204, inputting the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
In some embodiments, the execution subject may input the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object. The pre-trained embedded neural network model may be a convolutional neural network model or a recurrent neural network model. The region information of the target object may be the space vector information that the pre-trained embedded neural network model outputs for the input space vector information set.
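As an illustration only, a PyTorch-style inference call might look as follows; the tensor layout (one row of six numbers per space vector, i.e. two 3-D endpoints) and the model interface are assumptions, since the text states only that a pre-trained embedded neural network model is used:

    import torch

    def predict_region_info(model: torch.nn.Module, space_vectors) -> torch.Tensor:
        # Pack the space vector information set as a batch of n x 6 rows
        # (two 3-D endpoints per vector) and run a forward pass.
        x = torch.as_tensor(space_vectors, dtype=torch.float32).reshape(1, -1, 6)
        model.eval()
        with torch.no_grad():
            return model(x)  # taken downstream as the region information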
Optionally, the execution subject may transmit the region information of the target object to an associated device for that device to perform image processing.
As an example, the execution subject may send the region information of the target object to an image processor, so that the image processor performs image processing on the region information.
The above embodiments of the present disclosure have the following advantages: the region information generation method of some embodiments of the present disclosure obtains the region information of the target object while improving the image processing efficiency of the computer. Specifically, the image processing efficiency of the computer decreases because the computer processes the image directly, which occupies a large amount of video memory and main memory and increases the image processing time. Based on this, the region information generation method of some embodiments of the present disclosure first acquires an image of a target object as a target image; acquiring an image that includes the target object facilitates subsequent processing of the target object. Second, a feature vector set is generated based on the target image and a pre-deployed feature extraction model. Subsequent image processing therefore does not need to operate on the whole image and can instead process the extracted feature vectors, which reduces the occupied video memory and main memory and increases the space available to the computer. Then, a space vector information set is generated from the feature vector set, where the space vector information represents a spatial structure between the feature vectors in the feature vector set. The obtained feature vector set can thus be converted into space vector information, a spatial structure over the feature vector set can be constructed, and the spatial relationship between the individual feature vectors can be determined. Finally, the space vector information set is input into a pre-trained embedded neural network model to obtain the region information of the target object. Because the feature vectors of the target image are extracted and the region information is obtained through the pre-trained embedded neural network model, the occupation of the computer's video memory and main memory is reduced, the image processing time is shortened, and the image processing efficiency of the computer is improved.
With further reference to fig. 3, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a region information generating apparatus. These apparatus embodiments correspond to the method embodiments shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 3, the region information generating apparatus 300 of some embodiments includes: an acquisition unit 301, a first generation unit 302, a second generation unit 303, and an input unit 304. The acquisition unit 301 is configured to acquire an image of a target object as a target image. The first generation unit 302 is configured to generate a feature vector set based on the target image and a pre-deployed feature extraction model. The second generation unit 303 is configured to generate a space vector information set from the feature vector set, where the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information. The input unit 304 is configured to input the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
It will be understood that the units described in the apparatus 300 correspond to the respective steps of the method described with reference to fig. 2. Therefore, the operations, features and advantages described above for the method also apply to the apparatus 300 and the units included therein, and are not repeated here.
Referring now to fig. 4, a schematic structural diagram of an electronic device 400 (e.g., the computing device 101 of fig. 1) suitable for implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device 400 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 402 or a program loaded from a storage device 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the electronic device 400. The processing device 401, the ROM 402 and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 409. The communication device 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in fig. 4 may represent one device or multiple devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 409, or from the storage device 408, or from the ROM 402. The computer program, when executed by the processing apparatus 401, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the above apparatus, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an image of a target object as a target image; generate a feature vector set based on the target image and a pre-deployed feature extraction model; generate a space vector information set according to the feature vector set, wherein the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information; and input the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first generation unit, a second generation unit, and an input unit. Here, the names of these units do not constitute a limitation of the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires an image of a target object as a target image".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should appreciate that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (10)

1. A region information generating method, comprising:
acquiring an image of a target object as a target image;
generating a feature vector set based on the target image and a pre-deployed feature extraction model;
generating a space vector information set according to the feature vector set, wherein the space vector information represents a spatial structure between the feature vectors in the feature vector set corresponding to the space vector information;
and inputting the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
2. The method of claim 1, wherein the method further comprises:
sending the region information of the target object to an associated device for the device to perform image processing.
3. The method of claim 2, wherein the generating a set of feature vectors based on the target image and a pre-deployed feature extraction model comprises:
performing image graying processing on the target image to obtain a gray value of each pixel point in the target image and a gray value of each background pixel point in a background image corresponding to the target image;
generating a difference value based on the gray value of each pixel point in the target image and the gray value of the background pixel point corresponding to the pixel point in the background image corresponding to the target image to obtain a difference value set;
performing difference detection processing on each difference in the difference set to generate a target point, so as to obtain a target point set;
and performing aggregation processing on the target point set to generate a characteristic vector set.
4. The method of claim 3, wherein the generating a difference value based on the grayscale value of each pixel point in the target image and the grayscale value of the background pixel point corresponding to the pixel point in the background image corresponding to the target image comprises:
determining a first predetermined threshold as the difference value in response to the difference between the pixel point and the background pixel point satisfying a first preset condition;
and determining a second predetermined threshold as the difference value in response to the difference between the pixel point and the background pixel point satisfying a second preset condition.
5. The method of claim 4, wherein the feature extraction model includes a radar feature extraction model, a binocular camera specific feature extraction model and a spatial feature extraction model, the radar feature extraction model being a model for analyzing speed and direction of motion, the binocular camera specific feature extraction model being a model for processing a video stream, and the spatial feature extraction model being a model for building a space vector model.
6. The method of claim 5, wherein the aggregating the set of target points to generate a set of feature vectors comprises:
generating coordinate information of a central point of a target object connected region based on the target point set;
generating parameter information of the target object connected region based on the coordinate information of the center point of the target object connected region;
and generating a feature vector set based on the parameter information.
7. The method of claim 6, wherein the generating a space vector information set from the feature vector set comprises:
generating a space vector information set based on the feature vector set by using the following formula:
M_t = ((V_c(t+α) − V_ct) / α) · cos θ_t, M = {M_1, M_2, ..., M_n}

wherein M_t represents the space vector information, V_ct represents the coordinate information of the center point of the target image connected region, c represents the center point of the target image connected region, t represents the frame number of the target image in the acquired image set, α represents the frame interval value, V_c(t+α) represents the coordinate information of the center point of the connected region corresponding to the image of the (t+α)-th frame, θ_t represents the included angle between the vector formed from V_c(t−α) to V_ct and the vector formed from V_ct to V_c(t+α), V_c(t−α) represents the coordinate information of the center point of the connected region corresponding to the image of the (t−α)-th frame, M represents the space vector information set, and n represents the number of pieces of space vector information included in the space vector information set.
8. A region information generating apparatus, comprising:
an acquisition unit configured to acquire an image of a target object as a target image;
a first generation unit configured to generate a feature vector set based on the target image and a pre-deployed feature extraction model;
a second generating unit configured to generate a set of space vector information from the set of feature vectors, wherein the space vector information characterizes a spatial structure between feature vectors in the set of feature vectors corresponding to the space vector information;
and the input unit is configured to input the space vector information set into a pre-trained embedded neural network model to obtain the region information of the target object.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202011506435.5A (priority date: 2020-12-18; filing date: 2020-12-18) Region information generation method and device, electronic equipment and computer readable medium. Status: Pending. Publication: CN114723933A (en).

Priority Applications (1)

CN202011506435.5A (priority date: 2020-12-18; filing date: 2020-12-18): CN114723933A (en), Region information generation method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

CN202011506435.5A (priority date: 2020-12-18; filing date: 2020-12-18): CN114723933A (en), Region information generation method and device, electronic equipment and computer readable medium

Publications (1)

CN114723933A, published 2022-07-08

Family

ID=82230002

Family Applications (1)

CN202011506435.5A (priority date: 2020-12-18; filing date: 2020-12-18): Region information generation method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN114723933A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination