CN110472599A - Object quantity determination method and device, storage medium and electronic equipment - Google Patents

Object quantity determination method and device, storage medium and electronic equipment

Info

Publication number
CN110472599A
CN110472599A (application CN201910769944.8A)
Authority
CN
China
Prior art keywords
image
processed
numerical value
preset threshold
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910769944.8A
Other languages
Chinese (zh)
Other versions
CN110472599B (en)
Inventor
郁昌存
王德鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Jingdong Technology Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN201910769944.8A
Publication of CN110472599A
Priority to PCT/CN2020/108677
Application granted
Publication of CN110472599B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The present disclosure provides an object quantity determination method, an object quantity determination apparatus, a computer-readable storage medium, and an electronic device, belonging to the technical field of computer vision. The method includes: recognizing objects in an image to be processed, and taking the number of recognized objects as a first value; comparing the first value with a preset threshold; if the first value is less than the preset threshold, determining the number of objects in the image to be processed to be the first value; and if the first value is greater than the preset threshold, performing density detection on the objects in the image to be processed to obtain a second value for the number of objects, and determining the number of objects in the image to be processed to be the second value. The present disclosure can accurately determine the number of objects even when objects are densely distributed, and has high applicability.

Description

Object quantity determination method and device, storage medium and electronic equipment
Technical field
The present disclosure relates to the technical field of computer vision, and in particular to an object quantity determination method, an object quantity determination apparatus, a computer-readable storage medium, and an electronic device.
Background art
In many situations, it is necessary to count the number of certain objects, for example counting the tourists at a scenic spot or the vehicles in a parking lot.
A traditional method is to count the objects flowing in and out at the entrances of the target area, for example by installing gates or infrared sensing devices at the entrance of a scenic spot, or barrier equipment at the entrance of a parking lot. However, this method cannot count the objects in an open area, such as the tourists in an open scenic spot or the vehicles on a street, and it can only count the total number of objects in the target area without determining how the objects are distributed.
With the development of deep learning and computer vision, methods for determining the number of objects based on surveillance images have appeared in the prior art. Taking counting the tourists at a scenic spot as an example, surveillance cameras are installed at different locations of the scenic spot to capture images of the scenic spot in real time, and tourists are recognized from the images so that their number can be counted. Compared with the traditional method above, the prior art is clearly improved: it can be applied to open areas and can obtain the distribution of objects within a region. However, certain problems remain: when the object density is high, and especially when occlusion exists, for example at a scenic spot during the holiday shuttle-bus rush hours or on a street section during peak traffic, the accuracy of the prior art is low, and the determined number of objects differs considerably from the actual number, usually falling below it, which limits its application.
It should be noted that the information disclosed in the background section above is only intended to enhance the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.
Summary of the invention
The present disclosure provides an object quantity determination method, an object quantity determination apparatus, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem that the prior art determines the number of objects with low accuracy when the object density is high.
Other features and advantages of the present disclosure will become apparent from the following detailed description, or will be learned in part through practice of the present disclosure.
According to a first aspect of the present disclosure, an object quantity determination method is provided, comprising: recognizing objects in an image to be processed, and taking the number of recognized objects as a first value; comparing the first value with a preset threshold; if the first value is less than the preset threshold, determining the number of the objects in the image to be processed to be the first value; and if the first value is greater than the preset threshold, performing density detection on the objects in the image to be processed to obtain a second value for the number of the objects, and determining the number of the objects in the image to be processed to be the second value.
In an exemplary embodiment of the present disclosure, the method further comprises: acquiring a target image, dividing the target image into a plurality of regions, and taking the image of each region as the image to be processed.
In an exemplary embodiment of the present disclosure, each region has a corresponding preset threshold.
In an exemplary embodiment of the present disclosure, recognizing objects in the image to be processed comprises: recognizing the objects in the image to be processed by means of a pre-trained first neural network model.
In an exemplary embodiment of the present disclosure, the first neural network model comprises a YOLO model (You Only Look Once, a real-time object detection framework with multiple versions such as v1, v2 and v3; the present disclosure may use any of them).
In an exemplary embodiment of the present disclosure, performing density detection on the objects in the image to be processed comprises: performing density detection on the objects in the image to be processed by means of a pre-trained second neural network model.
In an exemplary embodiment of the present disclosure, the second neural network model comprises: a first branch network for performing first convolution processing on the image to be processed to obtain a first feature image; a second branch network for performing second convolution processing on the image to be processed to obtain a second feature image; a third branch network for performing third convolution processing on the image to be processed to obtain a third feature image; a merging layer for merging the first feature image, the second feature image and the third feature image into a final feature image; and an output layer for mapping the final feature image into a density image.
According to a second aspect of the present disclosure, an object quantity determination apparatus is provided, comprising: a recognition module for recognizing objects in an image to be processed and taking the number of recognized objects as a first value; a comparison module for comparing the first value with a preset threshold; a first determination module for determining, if the first value is less than the preset threshold, the number of the objects in the image to be processed to be the first value; and a second determination module for performing, if the first value is greater than the preset threshold, density detection on the objects in the image to be processed, obtaining a second value for the number of the objects, and determining the number of the objects in the image to be processed to be the second value.
In an exemplary embodiment of the present disclosure, the apparatus further comprises: an acquisition module for acquiring a target image, dividing the target image into a plurality of regions, and taking the image of each region as the image to be processed.
In an exemplary embodiment of the present disclosure, each region has a corresponding preset threshold.
In an exemplary embodiment of the present disclosure, the recognition module is configured to recognize the objects in the image to be processed by means of a pre-trained first neural network model.
In an exemplary embodiment of the present disclosure, the first neural network model comprises a YOLO model.
In an exemplary embodiment of the present disclosure, the second determination module comprises: a density detection unit for performing density detection on the objects in the image to be processed by means of a pre-trained second neural network model.
In an exemplary embodiment of the present disclosure, the second neural network model comprises: a first branch network for performing first convolution processing on the image to be processed to obtain a first feature image; a second branch network for performing second convolution processing on the image to be processed to obtain a second feature image; a third branch network for performing third convolution processing on the image to be processed to obtain a third feature image; a merging layer for merging the first feature image, the second feature image and the third feature image into a final feature image; and an output layer for mapping the final feature image into a density image.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of the above.
According to a fourth aspect of the present disclosure, an electronic device is provided, comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any one of the above by executing the executable instructions.
Exemplary embodiments of the present disclosure have the following beneficial effects:
The objects in the image to be processed are recognized, and according to the relationship between the first value obtained by recognition and the preset threshold, it is judged whether the objects in the image are sparse or dense, thereby determining whether to take the first value or the second value obtained by density detection as the final result. On the one hand, if the first value is greater than the preset threshold, the objects in the image are dense and occlusion may exist; in that case density detection is used and the resulting second value is taken as the final result, so the number of objects can be determined accurately, and the present exemplary embodiment therefore has high accuracy. On the other hand, combining object recognition and density detection provides high flexibility: by adjusting the preset threshold, the present exemplary embodiment can be applied to a variety of different scenes and therefore has high applicability.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Detailed description of the invention
The drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure. Obviously, the drawings in the following description are only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a flowchart of an object quantity determination method in the present exemplary embodiment;
Fig. 2 shows a scenic-spot surveillance image to be processed;
Fig. 3 shows a visualization of tourist recognition performed on the scenic-spot surveillance image;
Fig. 4 shows a structural diagram of a neural network model in the present exemplary embodiment;
Fig. 5 shows a schematic diagram of dividing a target image into regions in the present exemplary embodiment;
Fig. 6 shows a flowchart of another object quantity determination method in the present exemplary embodiment;
Fig. 7 shows a structural block diagram of an object quantity determination apparatus in the present exemplary embodiment;
Fig. 8 shows a computer-readable storage medium for implementing the above method in the present exemplary embodiment;
Fig. 9 shows an electronic device for implementing the above method in the present exemplary embodiment.
Specific embodiment
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, exemplary embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete and will fully convey the concepts of the exemplary embodiments to those skilled in the art. The described features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
Exemplary embodiments of the present disclosure first provide a method for determining the number of objects in an image. Scenes to which the method can be applied include, but are not limited to: counting people in areas such as scenic spots and shopping malls; counting vehicles in areas such as parking lots and streets; monitoring the number of ships in areas such as ports and harbors; and monitoring the number of livestock on a farm. The following description takes counting the number of people at a scenic spot as an example; the method applies equally to other scenes.
Fig. 1 shows the method flow of the present exemplary embodiment, which may include steps S110 to S140:
Step S110: recognize the objects in the image to be processed, and take the number of recognized objects as the first value.
The image to be processed may be a surveillance image of the scenic spot, a GIS image (Geographic Information System image, which includes satellite views of the ground surface, population heat maps, etc.), or the like. For example, a background computer or server pulls the video stream of a surveillance camera at the scenic spot. Network cameras currently provide video streams over protocols such as RTMP (Real Time Messaging Protocol) and HTTP (Hyper Text Transfer Protocol); the online video stream can be pulled via OpenCV (Open Source Computer Vision Library) to obtain real-time video frames, and a single frame is taken as the image to be processed, such as the single-frame surveillance image of a scenic spot shown in Fig. 2.
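As an illustration of how such a frame might be pulled, a minimal Python sketch using OpenCV is given below; the stream URL is a placeholder and the error handling is an assumption for illustration, not part of the disclosure.

```python
import cv2

# Hypothetical stream address; replace with the actual RTMP/HTTP URL of the surveillance camera.
STREAM_URL = "rtmp://example.com/live/scenic_spot_cam01"

def grab_frame(stream_url: str):
    """Open the online video stream and return a single real-time frame as the image to be processed."""
    cap = cv2.VideoCapture(stream_url)
    if not cap.isOpened():
        raise RuntimeError("failed to open video stream")
    ok, frame = cap.read()  # one video frame as a BGR ndarray
    cap.release()
    if not ok:
        raise RuntimeError("failed to read a frame from the stream")
    return frame

if __name__ == "__main__":
    image_to_process = grab_frame(STREAM_URL)
    print("frame shape:", image_to_process.shape)
```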
After the image to be processed is obtained, the objects in it can be recognized. In an exemplary embodiment, deep learning may be used: the objects in the image to be processed are recognized by a pre-trained first neural network model. For example, the first neural network model may be a YOLO model, which can be trained on an open-source dense pedestrian detection dataset, or on a dataset obtained by manually labeling pictures of the application scene (for example, labeling the tourists in a large number of scenic-spot surveillance images). The YOLO model takes a scenic-spot surveillance image as input and outputs the bounding box information of all tourists in the image. For example, when Fig. 2 is fed into the YOLO model, the visualized output may be as shown in Fig. 3: the YOLO model recognizes the tourists in the image and actually obtains the bounding box (x, y, w, h) of each tourist, where x and y denote the position coordinates of the center of the bounding box in the image, and w and h denote the width and height of the bounding box. In addition, the first neural network model may also be a model of another object detection algorithm, such as R-CNN (Region-Convolutional Neural Network) or its improved versions such as Fast R-CNN and Faster R-CNN, or SSD (Single Shot MultiBox Detector). In an exemplary embodiment, object contours may also be detected in the image to be processed, and a contour whose shape is close to the object shape is recognized as an object.
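The disclosure names Faster R-CNN as one usable detector family; purely as an illustration of step S110, the sketch below counts persons with torchvision's Faster R-CNN implementation. The pretrained COCO weights, the person label id (1), and the score threshold are assumptions for illustration and are not prescribed by the disclosure.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pre-trained detector standing in for the "first neural network model".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()

def first_value_from_frame(frame_bgr, score_threshold: float = 0.5) -> int:
    """Recognize people in the image to be processed and return their count (the first value)."""
    image = to_tensor(frame_bgr[:, :, ::-1].copy())   # BGR (OpenCV) -> RGB tensor in [0, 1]
    with torch.no_grad():
        prediction = model([image])[0]                # dict with 'boxes', 'labels', 'scores'
    keep = (prediction["labels"] == 1) & (prediction["scores"] >= score_threshold)
    return int(keep.sum().item())
```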
In the present exemplary embodiment, the number of objects recognized in the image to be processed is the first value.
Step S120: compare the first value with the preset threshold.
Usually, when there are few objects in the image to be processed, each object appears relatively complete in the image and is easy to recognize, so the first value obtained in step S110 is close to the exact number of objects, i.e. the confidence of the first value is high. When there are many objects, several objects may be occluded or the image resolution of a single object may be low, so the objects are difficult to recognize and the confidence of the first value is low. As shown in Fig. 2 and Fig. 3 above, when there are many tourists at the scenic spot, recognizing the tourists in the surveillance image with the first neural network model leads to many misses in the dense central area.
In the present exemplary embodiment, whether the first value is credible is determined by comparing it with the preset threshold: if the first value is less than the preset threshold, the objects in the image to be processed are relatively sparse and the first value is credible; otherwise the objects in the image to be processed are relatively dense and the first value is not credible. The preset threshold may be determined empirically, from the characteristics of the region corresponding to the image to be processed, or from the size relationship between the image to be processed and the objects, which is not specially limited in the present disclosure.
Step S130: if the first value is less than the preset threshold, determine the number of objects in the image to be processed to be the first value.
As described above, when the condition of step S130 is met, the first value is credible and can therefore be output as the number of objects in the image to be processed.
Step S140: if the first value is greater than the preset threshold, perform density detection on the objects in the image to be processed, obtain a second value for the number of objects, and determine the number of objects in the image to be processed to be the second value.
When the condition of step S140 is met, the first value is not credible, and an approach other than object recognition, namely density detection, can be used to determine the number of objects in the image to be processed. Density detection differs from object recognition in that it mainly regresses the probability that an object is present in each region (or each pixel) of the image to be processed, and obtains the number of objects in the image statistically; this is the second value mentioned above. When there are many objects, especially when they are densely distributed and occlusion exists, density detection has higher confidence than object recognition, so the second value can be taken as the number of objects in the image to be processed and output as the result.
It should be added that the case where the first value equals the preset threshold can be regarded as a special case satisfying the condition of step S130, or as a special case satisfying the condition of step S140, so that step S130 or S140 is executed accordingly; the present disclosure does not specially limit this.
In an exemplary embodiment, density detection may be performed on the objects in the image to be processed by a pre-trained second neural network model. For example, the second neural network model may be an MCNN model (Multi-column Convolutional Neural Network). Fig. 4 shows a structure of an MCNN model 400, which may include: an input layer 410 for inputting the image to be processed; a first branch network 420 for performing first convolution processing on the image to be processed to obtain a first feature image; a second branch network for performing second convolution processing on the image to be processed to obtain a second feature image; a third branch network for performing third convolution processing on the image to be processed to obtain a third feature image; a merging layer for merging the first, second and third feature images into a final feature image; and an output layer for mapping the final feature image into a density image. The first, second and third convolution processing each include a series of operations such as convolution and pooling, but with different parameters (such as convolution kernel size and pooling parameters), which is equivalent to extracting features of the image to be processed at different scales to obtain the first, second and third feature images respectively; these are then merged into the final feature image and mapped into a density image, for example by a 1*1 convolution. In the density image, the value of each point represents the probability that the point belongs to an object, and summing the values of all points yields the second value representing the number of objects in the image to be processed.
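The following PyTorch sketch illustrates a three-branch, multi-column density network of the kind described above; the channel counts, kernel sizes and pooling configuration are illustrative assumptions rather than the exact structure of the disclosure.

```python
import torch
import torch.nn as nn

def branch(kernel: int, channels: tuple) -> nn.Sequential:
    """One column: convolution and pooling at a single scale (the kernel size sets the scale)."""
    c1, c2 = channels
    pad = kernel // 2
    return nn.Sequential(
        nn.Conv2d(3, c1, kernel, padding=pad), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
        nn.Conv2d(c1, c2, kernel, padding=pad), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class MultiColumnDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Three branches with different kernel sizes, i.e. different receptive-field scales.
        self.branch_large = branch(9, (16, 32))
        self.branch_medium = branch(7, (20, 40))
        self.branch_small = branch(5, (24, 48))
        # Merging followed by a 1*1 convolution mapping the merged features to a one-channel density image.
        self.output = nn.Conv2d(32 + 40 + 48, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.branch_large(x)    # first feature image
        f2 = self.branch_medium(x)   # second feature image
        f3 = self.branch_small(x)    # third feature image
        merged = torch.cat([f1, f2, f3], dim=1)   # final feature image
        return self.output(merged)                # density image

# Usage: the second value is the sum over the density image.
# model = MultiColumnDensityNet()
# density = model(torch.randn(1, 3, 384, 512))
# second_value = density.sum().item()
```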
The MCNN model may be trained on an open-source dataset in which the coordinates of each head are labeled in the images. A geometry-adaptive Gaussian kernel converts the head coordinates into a probability density image in which the probabilities over each head region sum to 1. With the original images as samples and the converted probability density images as labels (ground truth), the model can be trained.
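A simplified sketch of converting labeled head coordinates into such a density-map label follows; it uses a fixed-width Gaussian kernel instead of the geometry-adaptive kernel mentioned above, which is a simplifying assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_heads(head_coords, height: int, width: int, sigma: float = 4.0) -> np.ndarray:
    """Place a unit impulse at each labeled head position and blur it with a Gaussian,
    so that each head region integrates to roughly 1 and the whole map sums to the head count."""
    impulses = np.zeros((height, width), dtype=np.float32)
    for x, y in head_coords:  # (x, y) pixel coordinates of the labeled heads
        impulses[int(y), int(x)] += 1.0
    return gaussian_filter(impulses, sigma=sigma)

# Usage: label = density_map_from_heads([(120, 80), (130, 85)], height=384, width=512)
# label.sum() is approximately 2, the number of labeled heads.
```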
It should be appreciated that the second neural network model may also be another network for density detection, for example a variant of MCNN that adds a fourth branch network to the structure of Fig. 4, adds intermediate layers to the first, second or third branch network, or adds one or more fully connected layers; the present disclosure does not specially limit this.
Based on the above description, the present exemplary embodiment recognizes the objects in the image to be processed and, according to the relationship between the first value obtained by recognition and the preset threshold, judges whether the objects in the image are sparse or dense, thereby determining whether to take the first value as the final result or the second value obtained by density detection as the final result. On the one hand, if the first value is greater than the preset threshold, the objects in the image are dense and occlusion may exist; in that case density detection is used and the resulting second value is taken as the final result, so the number of objects can be determined accurately, and the present exemplary embodiment therefore has high accuracy. On the other hand, combining object recognition and density detection provides high flexibility: by adjusting the preset threshold, the present exemplary embodiment can be applied to a variety of different scenes and therefore has high applicability.
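The core decision of steps S110 to S140 can be summarized in a few lines. In the sketch below, `recognize_count` and `density_count` are stand-ins for the detector and the density network sketched earlier; they are assumptions for illustration, not a fixed interface of the disclosure.

```python
def determine_object_quantity(image, recognize_count, density_count, preset_threshold: int) -> int:
    """Steps S110-S140: object recognition first; density detection only when the scene is dense."""
    first_value = recognize_count(image)     # S110: number of recognized objects
    if first_value < preset_threshold:       # S120/S130: sparse scene, the first value is credible
        return first_value
    return density_count(image)              # S140: dense scene, take the second value instead
```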
In an exemplary embodiment, after the target image is obtained, it may be divided into a plurality of regions, and the image of each region is taken as an image to be processed. The target image is the complete image for which the number of objects needs to be determined, such as the original surveillance image of the scenic spot shown in Fig. 2. Because the camera is mounted high and covers a wide range, the captured image includes fixed objects, sky and the like, which introduce interference factors and affect the tourist count to some extent; moreover, the tourist distribution differs between regions, some dense and some sparse, so each region can be handled in a targeted way. In view of this, as shown in Fig. 5, Fig. 2 can be divided into multiple regions according to prior knowledge, the method flow of Fig. 1 is executed on each region image, and finally the numbers of objects in the regions are added to obtain the total number of objects in the target image.
In Fig. 5, region one cannot contain tourists, so the number of tourists in region one can always be set to 0. The tourists in regions two and three are relatively sparse and fixed objects occupy a large proportion, so tourists there can be recognized and counted by object recognition. Region four is where tourists mainly gather; it is dense and has rather severe occlusion, and object recognition performs poorly there, so the number of tourists there can be counted by density detection.
Besides dividing regions according to prior knowledge, other approaches can be used. Several exemplary approaches are given below, but they should not limit the scope of protection of the present disclosure:
(1) Divide regions according to the distribution of objects in the target image: first perform object recognition on the target image to obtain the approximate position of each object; then roughly select a part where objects are densely distributed, draw a boundary line where two objects are more than a certain distance apart, obtain a region, and compute the object density of that region (number of objects in the region / image area of the region); then gradually expand the region in all directions, replacing the region before expansion with the expanded region if the object density increases, and restoring the region before expansion if the object density decreases, until the object density reaches a maximum, at which point the region is taken as a delimited region. The delimited region is removed from the target image, and the above process is repeated on the remaining part until the region division is completed.
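A rough sketch of the greedy, density-maximizing expansion in approach (1) is given below; the rectangular region representation, the step size and the expansion order are illustrative assumptions, since the disclosure does not fix them.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Rect = Tuple[int, int, int, int]  # (x0, y0, x1, y1): an axis-aligned region of the target image

def object_density(region: Rect, objects: List[Point]) -> float:
    """Number of objects falling inside the region divided by the region's image area."""
    x0, y0, x1, y1 = region
    area = max((x1 - x0) * (y1 - y0), 1)
    inside = sum(1 for x, y in objects if x0 <= x < x1 and y0 <= y < y1)
    return inside / area

def grow_region(seed: Rect, objects: List[Point], bounds: Rect, step: int = 20) -> Rect:
    """Greedily expand the seed region while the object density keeps increasing."""
    bx0, by0, bx1, by1 = bounds
    region = seed
    while True:
        x0, y0, x1, y1 = region
        candidates = [
            (max(x0 - step, bx0), y0, x1, y1),   # expand left
            (x0, max(y0 - step, by0), x1, y1),   # expand up
            (x0, y0, min(x1 + step, bx1), y1),   # expand right
            (x0, y0, x1, min(y1 + step, by1)),   # expand down
        ]
        best = max(candidates, key=lambda r: object_density(r, objects))
        if object_density(best, objects) > object_density(region, objects):
            region = best     # density increased after expansion: keep the expanded region
        else:
            return region     # no expansion increases density: the region has reached its density maximum
```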
(2) This approach is suitable for surveillance images where the scene area captured by the camera does not change. A certain number of representative history images are retrieved from the surveillance footage, for example several frames taken between 2 p.m. and 3 p.m. each day (the scenic spot's peak tourist period) over the past week. The image is divided into small grid cells, and the probability of tourists appearing in each cell is computed (number of history images in which tourists appear in the cell / total number of selected history images), yielding a probability map. According to the probability distribution, cells with similar probabilities are connected into one region, so that the image is divided into multiple regions. Surveillance images captured later all use the result of this region division.
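A small sketch of computing the per-cell occurrence probability in approach (2) is shown below; the grid cell size and the per-image lists of tourist positions are assumptions for illustration.

```python
import numpy as np

def tourist_probability_map(history_positions, image_h: int, image_w: int, cell: int = 32) -> np.ndarray:
    """history_positions holds one list of (x, y) tourist centers per history image.
    Returns, per grid cell, the fraction of history images in which a tourist appeared there."""
    rows, cols = image_h // cell, image_w // cell
    counts = np.zeros((rows, cols), dtype=np.float32)
    for positions in history_positions:
        seen = np.zeros((rows, cols), dtype=bool)
        for x, y in positions:
            r = min(int(y) // cell, rows - 1)
            c = min(int(x) // cell, cols - 1)
            seen[r, c] = True                 # a tourist appears in this cell of this image
        counts += seen                        # count each history image at most once per cell
    return counts / max(len(history_positions), 1)

# Cells with similar probabilities are then connected into one region, for example by
# thresholding the probability map into a few bands.
```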
After the target image is divided into multiple regions, the method of Fig. 1 is executed on each region image. The preset thresholds used for the individual regions may be the same or different; that is, the regions may share a unified preset threshold, or each region may be given its own preset threshold. For example, in Fig. 5, a smaller preset threshold may be set for regions two and three and a larger preset threshold for region four. The preset threshold of each region may be determined empirically or computed from image features. For example, the area of the part of each region where tourists may appear can be computed and divided by the image area occupied by a single tourist, to estimate the number of tourists in the region when it is completely filled with tourists and no occlusion exists; this number, or this number multiplied by an empirical coefficient less than 1 (for example 0.9), may be taken as the preset threshold, which is not specially limited in the present disclosure. Using a targeted preset threshold for each region makes the total number of objects in the target image more accurate.
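The threshold estimate described above reduces to a one-line formula; the numbers in the usage comment are purely illustrative.

```python
def estimate_preset_threshold(walkable_area_px: float, area_per_tourist_px: float,
                              empirical_coeff: float = 0.9) -> int:
    """Estimate how many tourists fit in the region without occlusion, scaled by a coefficient below 1."""
    return int(walkable_area_px / area_per_tourist_px * empirical_coeff)

# Example (illustrative numbers): a region with 200,000 walkable pixels and roughly 800 pixels
# per tourist gives estimate_preset_threshold(200_000, 800) == 225.
```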
Fig. 6 shows another flow of the present exemplary embodiment, including: step S601, obtain a target image, which may for example be a surveillance image; step S602, divide the target image into multiple regions; step S603, take the image of each region as the image to be processed, and execute steps S604 to S608 for each of them: step S604, detect the number of objects in the image to be processed by object recognition, as the first value; step S605, compare the first value with the preset threshold; step S606, if the first value is less than the preset threshold, determine the number of objects in the region to be the first value; step S607, if the first value is greater than the preset threshold, the first value is not credible, and object density detection is further performed on the image to be processed to obtain the second value; step S608, determine the number of objects in the region to be the second value. Based on the above process, the number of objects in each region is obtained; finally, step S609 is executed: the numbers of objects in the regions are accumulated to obtain the total number of objects in the target image, thereby finally determining the number of objects in the target image.
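Putting the regional flow of Fig. 6 together, a minimal end-to-end sketch might look like the following; it reuses the `determine_object_quantity` stand-in from the earlier sketch, and the per-region threshold list is likewise an assumption for illustration.

```python
def count_objects_in_target_image(region_images, preset_thresholds,
                                  recognize_count, density_count) -> int:
    """Steps S601-S609: apply the per-region decision of Fig. 1 and accumulate the regional counts."""
    total = 0
    for region_image, threshold in zip(region_images, preset_thresholds):
        total += determine_object_quantity(region_image, recognize_count,
                                           density_count, threshold)    # S604-S608
    return total                                                         # S609: total over all regions
```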
Exemplary embodiments of the present disclosure also provide an object quantity determination apparatus. As shown in Fig. 7, the apparatus 700 may include: a recognition module 710 for recognizing objects in an image to be processed and taking the number of recognized objects as a first value; a comparison module 720 for comparing the first value with a preset threshold; a first determination module 730 for determining the number of objects in the image to be processed to be the first value if the first value is less than the preset threshold; and a second determination module 740 for performing density detection on the objects in the image to be processed if the first value is greater than the preset threshold, obtaining a second value for the number of objects, and determining the number of objects in the image to be processed to be the second value.
In an exemplary embodiment, the object quantity determination apparatus 700 may further include: an acquisition module (not shown in the figure) for obtaining a target image, dividing the target image into multiple regions, and taking the image of each region as the image to be processed.
In an exemplary embodiment, each of the regions has a corresponding preset threshold.
In an exemplary embodiment, the recognition module 710 may be configured to recognize the objects in the image to be processed by means of a pre-trained first neural network model.
In an exemplary embodiment, the first neural network model may be a YOLO model.
In an exemplary embodiment, the second determination module 740 may include: a density detection unit (not shown in the figure) for performing density detection on the objects in the image to be processed by means of a pre-trained second neural network model.
In an exemplary embodiment, the second neural network model may include: a first branch network for performing first convolution processing on the image to be processed to obtain a first feature image; a second branch network for performing second convolution processing on the image to be processed to obtain a second feature image; a third branch network for performing third convolution processing on the image to be processed to obtain a third feature image; a merging layer for merging the first feature image, the second feature image and the third feature image into a final feature image; and an output layer for mapping the final feature image into a density image.
Details not disclosed in the above apparatus embodiments can be found in the corresponding method embodiments above and are therefore not repeated here.
As will be appreciated by a person skilled in the art, the various aspects of the present disclosure may be implemented as a system, a method, or a program product. Therefore, the various aspects of the present disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software, which may be collectively referred to herein as a "circuit", "module" or "system".
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium on which a program product capable of implementing the above method of this specification is stored. In some possible embodiments, the various aspects of the present disclosure may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above of this specification.
Referring to Fig. 8, a program product 800 for implementing the above method according to an exemplary embodiment of the present disclosure is described. It may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto. In this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device.
The program code contained on a readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination thereof.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example through the Internet using an Internet service provider).
Exemplary embodiments of the present disclosure also provide an electronic device capable of implementing the above method. An electronic device 900 according to this exemplary embodiment of the present disclosure is described with reference to Fig. 9. The electronic device 900 shown in Fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 9, the electronic device 900 may take the form of a general-purpose computing device. The components of the electronic device 900 may include, but are not limited to: the at least one processing unit 910 mentioned above, the at least one storage unit 920 mentioned above, a bus 930 connecting different system components (including the storage unit 920 and the processing unit 910), and a display unit 940.
The storage unit 920 stores program code, which can be executed by the processing unit 910, so that the processing unit 910 performs the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section above of this specification. For example, the processing unit 910 may perform the method steps shown in Fig. 4 or Fig. 5, etc.
The storage unit 920 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 921 and/or a cache storage unit 922, and may further include a read-only storage unit (ROM) 923.
The storage unit 920 may also include a program/utility 924 having a set of (at least one) program modules 925. Such program modules 925 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 930 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 900 may also communicate with one or more external devices 1000 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 900, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may take place via an input/output (I/O) interface 950. Moreover, the electronic device 900 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through a network adapter 960. As shown, the network adapter 960 communicates with the other modules of the electronic device 900 through the bus 930. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in combination with the electronic device 900, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the method according to the exemplary embodiments of the present disclosure.
In addition, the above drawings are merely schematic illustrations of the processing included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the chronological order of these processes; it is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
It should be noted that, although several modules or units of a device for performing actions are mentioned in the above detailed description, such division is not mandatory. In fact, according to exemplary embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptive changes of the present disclosure, which follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An object quantity determination method, characterized by comprising:
recognizing objects in an image to be processed, and taking the number of recognized objects as a first value;
comparing the first value with a preset threshold;
if the first value is less than the preset threshold, determining the number of the objects in the image to be processed to be the first value;
if the first value is greater than the preset threshold, performing density detection on the objects in the image to be processed, obtaining a second value for the number of the objects, and determining the number of the objects in the image to be processed to be the second value.
2. The method according to claim 1, characterized in that the method further comprises:
acquiring a target image, dividing the target image into a plurality of regions, and taking the image of each region as the image to be processed.
3. The method according to claim 2, characterized in that each region has a corresponding preset threshold.
4. The method according to claim 1, characterized in that recognizing objects in the image to be processed comprises:
recognizing the objects in the image to be processed by means of a pre-trained first neural network model.
5. The method according to claim 4, characterized in that the first neural network model comprises a YOLO model.
6. The method according to claim 1, characterized in that performing density detection on the objects in the image to be processed comprises:
performing density detection on the objects in the image to be processed by means of a pre-trained second neural network model.
7. The method according to claim 6, characterized in that the second neural network model comprises:
a first branch network for performing first convolution processing on the image to be processed to obtain a first feature image;
a second branch network for performing second convolution processing on the image to be processed to obtain a second feature image;
a third branch network for performing third convolution processing on the image to be processed to obtain a third feature image;
a merging layer for merging the first feature image, the second feature image and the third feature image into a final feature image;
and an output layer for mapping the final feature image into a density image.
8. An object quantity determination apparatus, characterized by comprising:
a recognition module for recognizing objects in an image to be processed and taking the number of recognized objects as a first value;
a comparison module for comparing the first value with a preset threshold;
a first determination module for determining, if the first value is less than the preset threshold, the number of the objects in the image to be processed to be the first value;
a second determination module for performing, if the first value is greater than the preset threshold, density detection on the objects in the image to be processed, obtaining a second value for the number of the objects, and determining the number of the objects in the image to be processed to be the second value.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-7.
10. An electronic device, characterized by comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method according to any one of claims 1-7 by executing the executable instructions.
CN201910769944.8A 2019-08-20 2019-08-20 Object quantity determination method and device, storage medium and electronic equipment Active CN110472599B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910769944.8A CN110472599B (en) 2019-08-20 2019-08-20 Object quantity determination method and device, storage medium and electronic equipment
PCT/CN2020/108677 WO2021031954A1 (en) 2019-08-20 2020-08-12 Object quantity determination method and apparatus, and storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910769944.8A CN110472599B (en) 2019-08-20 2019-08-20 Object quantity determination method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN110472599A true CN110472599A (en) 2019-11-19
CN110472599B CN110472599B (en) 2021-09-03

Family

ID=68512644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910769944.8A Active CN110472599B (en) 2019-08-20 2019-08-20 Object quantity determination method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN110472599B (en)
WO (1) WO2021031954A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021031954A1 (en) * 2019-08-20 2021-02-25 北京海益同展信息科技有限公司 Object quantity determination method and apparatus, and storage medium and electronic device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283499B (en) * 2021-05-24 2022-09-13 南京航空航天大学 Three-dimensional woven fabric weaving density detection method based on deep learning
CN113486732A (en) * 2021-06-17 2021-10-08 普联国际有限公司 Crowd density estimation method, device, equipment and storage medium
CN113807260B (en) * 2021-09-17 2022-07-12 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium
CN114785943B (en) * 2022-03-31 2024-03-05 联想(北京)有限公司 Data determination method, device and computer readable storage medium
CN115384796A (en) * 2022-04-01 2022-11-25 中国民用航空飞行学院 Airport management system capable of increasing passenger transfer efficiency

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1089214A2 (en) * 1999-09-30 2001-04-04 Matsushita Electric Industrial Co., Ltd. Apparatus and method for image recognition
CN101320427A (en) * 2008-07-01 2008-12-10 北京中星微电子有限公司 Video monitoring method and system with auxiliary objective monitoring function
CN102831613A (en) * 2012-08-29 2012-12-19 武汉大学 Parallel fractural network evolution image segmentation method
CN107093171A (en) * 2016-02-18 2017-08-25 腾讯科技(深圳)有限公司 A kind of image processing method and device, system
CN108009477A (en) * 2017-11-10 2018-05-08 东软集团股份有限公司 Stream of people's quantity detection method, device, storage medium and the electronic equipment of image
CN108875587A (en) * 2018-05-24 2018-11-23 北京飞搜科技有限公司 Target distribution detection method and equipment
CN109224442A (en) * 2018-09-03 2019-01-18 腾讯科技(深圳)有限公司 Data processing method, device and the storage medium of virtual scene
CN109389589A (en) * 2018-09-28 2019-02-26 百度在线网络技术(北京)有限公司 Method and apparatus for statistical number of person
CN109815868A (en) * 2019-01-15 2019-05-28 腾讯科技(深圳)有限公司 A kind of image object detection method, device and storage medium
CN110008783A (en) * 2018-01-04 2019-07-12 杭州海康威视数字技术股份有限公司 Human face in-vivo detection method, device and electronic equipment based on neural network model

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011153114A2 (en) * 2010-05-31 2011-12-08 Central Signal, Llc Train detection
CN106845344B (en) * 2016-12-15 2019-10-25 重庆凯泽科技股份有限公司 Demographics' method and device
CN108399388A (en) * 2018-02-28 2018-08-14 福州大学 A kind of middle-high density crowd quantity statistics method
CN110472599B (en) * 2019-08-20 2021-09-03 北京海益同展信息科技有限公司 Object quantity determination method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2021031954A1 (en) 2021-02-25
CN110472599B (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN110472599A (en) Number of objects determines method, apparatus, storage medium and electronic equipment
US10650236B2 (en) Road detecting method and apparatus
CN114902294B (en) Fine-grained visual recognition in mobile augmented reality
CN113657390B (en) Training method of text detection model and text detection method, device and equipment
CN105051754B (en) Method and apparatus for detecting people by monitoring system
US11182611B2 (en) Fire detection via remote sensing and mobile sensors
US20160350599A1 (en) Video camera scene translation
EP3951741B1 (en) Method for acquiring traffic state, relevant apparatus, roadside device and cloud control platform
CN114758337B (en) Semantic instance reconstruction method, device, equipment and medium
KR102387357B1 (en) A method and apparatus for detecting an object in an image by matching a bounding box on a space-time basis
CN110263714A (en) Method for detecting lane lines, device, electronic equipment and storage medium
Murray Evolving location analytics for service coverage modeling
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN113780270A (en) Target detection method and device
CN110795975A (en) Face false detection optimization method and device
CN115294268A (en) Three-dimensional model reconstruction method of object and electronic equipment
US20170045619A1 (en) Method and apparatus to recover scene data using re-sampling compressive sensing
Xia et al. Computer vision based first floor elevation estimation from mobile LiDAR data
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN114663980B (en) Behavior recognition method, and deep learning model training method and device
CN116229247A (en) Indoor scene semantic segmentation method, device, equipment and medium
KR101326644B1 (en) Full-body joint image tracking method using evolutionary exemplar-based particle filter
CN113554882A (en) Method, apparatus, device and storage medium for outputting information
US10331928B2 (en) Low-computation barcode detector for egocentric product recognition
US10163006B2 (en) Selection determination for freehand marks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.