
Real-Time Recognition of Automotive Vehicles Using Advanced Imaging Technology



In today’s world of terrorist threats and Amber Alerts, the ability to find and track the movements of specific makes and models of automotive vehicles has become very important. The increasing use of video cameras to monitor traffic is limited in scope and performance by its reliance on human operators. If the make, model, and year of a vehicle could be automatically recognized from real-time video feeds, that information could be used to track the movements of suspect vehicles and correlate that movement to a particular situation or scenario.

The proposed research will explore the application of advanced imaging techniques to vehicle recognition. By constraining the views to limit variability, it is anticipated that a richer feature set, together with techniques borrowed from other recognition domains (hand geometry, facial recognition), can be employed.

The scope of the work will be to develop the fundamental techniques and algorithms in a static environment, that is, using still images as if they had been extracted from a video feed. Conclusions will also be drawn about the image quality parameters required to make recognition from a live video feed viable.

Follow-on work based on these results will include:

  • adapting the techniques to the dynamic environment of a live video feed

  • analyzing the data from multiple video cameras using AI techniques



Since the development of computers, the world has been awash in structured, field-oriented data. The nature of such data is that it is relatively easy to know where to look for information and what attributes are associated with it. However, the increasing penetration of technologies that generate unstructured data (video and audio streams, unstructured text on the Internet, etc.) into daily life is creating both a new challenge and a new opportunity. In broad terms, how can these new forms of data be used to advantage? An illustrative example is set in the context of what are known as “Amber Alerts”.


Amber Alerts are used in the case of child abduction. On notification of a child’s abduction, law enforcement agencies broadcast over many different media (AM/FM radio, television, and roadside signs) a description of the people involved and usually the make, model, and year of the vehicle. In effect, the general public becomes part of a large, ad-hoc surveillance network.

At the same time, the use of video cameras to monitor and direct traffic flow is becoming commonplace. With the decreasing cost of video technology, cameras proliferate, and with millions of miles of roadway the potential for growth is enormous. However, the feeds from these cameras are monitored by people, so the ability to monitor and track information at any level of detail is constrained by the inherent limits of human attention.

Identification of Problem and Goal to Be Achieved

Therefore, the broad question to be addressed is: “Can a new technology be developed, or existing technology extended, to recognize a vehicle’s make, model, year, color, etc., from a live video stream?” In more technical terms: What features and recognition algorithms are required? How can the views of a vehicle from different angles, as presented in video streams, be used to strengthen recognition? Can background clutter be easily eliminated? What is the impact of varying lighting conditions and weather on recognition? What quality of video is required to enable effective recognition? Can recognition achieved across multiple cameras be coupled to machine learning techniques to achieve an effective unattended system?

Significance of Work

The eventual goal of this research is to create computer methods for the automatic identification of objects in a constrained environment. Automated identification of motor vehicles has many potential uses, including:

  • Amber Alerts 

  • Terrorist movements

  • Monitoring of traffic 

  • Hit-and-run incidents, while vehicle damage is still fresh 

Research Questions to Be Investigated

Automatic object detection is a difficult undertaking. The main challenge is the amount of variation in visual appearance. An object detector must cope with the variation within the object category and with the diversity of the visual imagery. For example, cars vary in size, shape, coloring, and details such as headlights or tires. The lighting, surrounding scenery, and the distance and angle of the view (an object’s pose) all affect its appearance.

The central research issue is how to cope with variation in appearance. The research will investigate how to:

  • Determine eigencars, car geometry or other models for feature extraction 

  • Constrain viewing angles 

  • Reinforce information by combining multiple images to get data for feature extraction 

  • Recognize background clutter 

  • Determine the relationship of accuracy to the quality of video images (i.e., video resolution)
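As one concrete illustration of the first bullet, an "eigencar" basis can be built in direct analogy with eigenfaces: images are treated as vectors, mean-centered, and projected onto the leading eigenvectors of their covariance. The sketch below is a minimal pure-Python version that recovers only the top eigenvector via power iteration; the function names and the tiny toy vectors are ours, not part of any established library.

```python
def mean_vector(rows):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def top_eigenvector(rows, iters=200):
    """Power iteration on the (implicit) covariance of mean-centered rows:
    returns the mean and the dominant eigenvector direction."""
    mu = mean_vector(rows)
    centered = [[x - m for x, m in zip(r, mu)] for r in rows]
    d = len(mu)
    v = [1.0] * d
    for _ in range(iters):
        # Apply C = X^T X implicitly: proj = X v, then w = X^T proj.
        proj = [sum(x * vi for x, vi in zip(r, v)) for r in centered]
        w = [sum(p * r[j] for p, r in zip(proj, centered)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mu, v

def project(vec, mu, v):
    """Coordinate of an image vector along the first eigencar direction."""
    return sum((x - m) * vi for x, m, vi in zip(vec, mu, v))
```

For real images the vectors would be full pixel arrays and several eigenvectors would be retained; a numerical library would normally replace the hand-rolled iteration.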

Limitations and Delimitations of the Study

The research will develop a model by working with a set of 2- and 4-door passenger cars. Trucks, SUVs, vans, etc. will be excluded. The model will then be applied to vehicles with differing characteristics. The goal will be to generalize the model to the highest degree possible.


In this research we are developing computer and camera methods that will automatically locate automobiles in still photographs. Our goal is to develop algorithms that are accurate and computationally efficient. Our approach is to use statistical modeling to capture the variation in automobile appearance.

Historical Overview of the Theory and Research Literature

Previous approaches used to automatically locate automobiles in still photographs and video include:

  1. Use of a set of models that each describe the statistical behavior of a group of wavelet coefficients [39, 63]; and

  2. Modeling the statistics of appearance implicitly using an artificial neural network [47, 48, 49].

However, both of these approaches share one fundamental limitation: because of limited computer memory and training data, they use a discrete number of values to describe appearance [53].

Schneiderman [53] describes “a statistical method for 3D object detection. In this method, the 3D geometry of each object is decomposed into a small number of viewpoints. For each viewpoint, a decision rule is constructed that determines if the object is present at that specific orientation. Each decision rule uses the statistics of both object appearance and "non-object" visual appearance. Each set of statistics is represented using a product of histograms. Each histogram represents the joint statistics of a subset of wavelet coefficients and their position on the object. The approach is to use many such histograms representing a wide variety of visual attributes. Using this method, the first algorithm is developed that can reliably detect faces that vary from frontal view to full profile view and the first algorithm that can reliably detect cars over a wide range of viewpoints.”
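The decision rule quoted above can be sketched in miniature. Assuming a single quantized visual attribute with one histogram over car windows and one over background windows (the counts below are invented for illustration), classification reduces to comparing a sum of log likelihood ratios against a threshold:

```python
import math

def log_likelihood_ratio(values, obj_hist, bg_hist):
    """Sum of per-value log likelihood ratios, i.e. the log of the
    product-of-histograms ratio used in the quoted decision rule."""
    total_obj = sum(obj_hist.values())
    total_bg = sum(bg_hist.values())
    score = 0.0
    for v in values:
        # Laplace smoothing so an unseen bin does not zero the product.
        p_obj = (obj_hist.get(v, 0) + 1) / (total_obj + len(obj_hist) + 1)
        p_bg = (bg_hist.get(v, 0) + 1) / (total_bg + len(bg_hist) + 1)
        score += math.log(p_obj / p_bg)
    return score

# Invented histograms: how one quantized attribute value distributes
# over car windows versus background windows.
obj_hist = {0: 5, 1: 40, 2: 55}
bg_hist = {0: 60, 1: 30, 2: 10}

def is_object(values, threshold=0.0):
    """Accept the window when the log ratio exceeds the threshold."""
    return log_likelihood_ratio(values, obj_hist, bg_hist) > threshold
```

A full detector would use many such histograms, one per subset of wavelet coefficients and position, and a threshold tuned on training data.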

Rajagopalan, Burlina, Chellappa [41]

“This paper describes a method for car detection from aerial images. They use a distance based classification metric on 16 x 16 regions. They cluster their training images into several classes of cars and several classes of non-cars. For each 16 x 16 input region, they compute the distance to each class. If the input is closest to a car cluster and under some threshold they classify it as a car. The distance threshold they use could be thought of as a Mahalanobis-like distance metric, except instead of normalizing distance by just 2nd order statistical moments, as in Mahalanobis distance, they use some higher order moments also. They have reported some success in detecting cars from this vantage [53]”.
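A toy version of this classification scheme might look as follows. We substitute a diagonal (variance-normalized) Mahalanobis distance for the higher-order-moment metric the paper actually uses, and the cluster statistics are invented:

```python
def diag_mahalanobis(x, mean, var):
    """Distance normalized by per-dimension variance (a diagonal
    Mahalanobis distance); the paper also folds in higher-order
    moments, which this sketch omits."""
    return sum((xi - mi) ** 2 / vi
               for xi, mi, vi in zip(x, mean, var)) ** 0.5

def classify(x, clusters, threshold):
    """clusters: list of (label, mean, var). Return 'car' only when the
    nearest cluster is a car cluster and its distance is under the
    threshold, mirroring the paper's accept rule."""
    label, dist = min(
        ((lab, diag_mahalanobis(x, m, v)) for lab, m, v in clusters),
        key=lambda t: t[1],
    )
    return label if label == "car" and dist < threshold else "non-car"

# Invented cluster statistics for a 2-D feature space.
clusters = [
    ("car", [1.0, 2.0], [0.5, 0.5]),
    ("non-car", [5.0, 5.0], [1.0, 1.0]),
]
```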

Papageorgio, Poggio [39]

“In this method the Haar wavelet transform is taken of each input region. The wavelet coefficients from two of the middle frequency bands (3,030 wavelet coefficients) are used as input to a quadratic classifier. The coefficients of the quadratic classifier are learned using the Support Vector Machine training method. They report some success in detecting straight-on frontal and straight-on rear views [53]”.
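The wavelet step of this pipeline is easy to sketch. Below is a one-level 1-D Haar transform (the papers use 2-D transforms over image windows) plus a stand-in quadratic scoring function of the form an SVM with a quadratic kernel induces; the weights are invented, not trained:

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (low-pass band) and pairwise differences (high-pass band)."""
    assert len(signal) % 2 == 0
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

def quadratic_score(coeffs, weights, bias=0.0):
    """Degree-2 polynomial score over wavelet coefficients, the shape of
    decision function a quadratic-kernel SVM learns; weights here are
    placeholders, not the result of SVM training."""
    feats = list(coeffs) + [c * c for c in coeffs]
    return sum(w * f for w, f in zip(weights, feats)) + bias
```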

Schneiderman, Kanade [54]

“This paper describes a trainable object detector and its instantiations for detecting faces and cars at any size, location, and pose. To cope with variation in object orientation, the detector uses multiple classifiers, each spanning a different range of orientation. Each of the classifiers determines whether the object is present at a specified size within a fixed-size window. To find the object at any location and size, these classifiers scan the image exhaustively [54]”.
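The exhaustive scan itself can be sketched as a sliding-window loop; the `detector` callback stands in for one of the orientation-specific classifiers, and the function names and toy data are ours:

```python
def sliding_windows(width, height, win, step):
    """Yield the top-left corner of every fixed-size window position."""
    for y in range(0, height - win + 1, step):
        for x in range(0, width - win + 1, step):
            yield x, y

def scan(image, win, step, detector):
    """Apply a fixed-size-window detector at every location of a 2-D
    image (list of rows); returns the positions the detector accepts.
    Running the same loop on rescaled copies of the image would handle
    multiple object sizes."""
    h = len(image)
    w = len(image[0])
    hits = []
    for x, y in sliding_windows(w, h, win, step):
        patch = [row[x:x + win] for row in image[y:y + win]]
        if detector(patch):
            hits.append((x, y))
    return hits
```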

Summary of What is Known and Unknown

A view-based approach works as follows: for each object, several detectors are built, each specialized to a specific orientation and able to accommodate small amounts of variation. To detect an object at any orientation, all detectors are applied to the image and the results are merged so that they are spatially consistent.

The main unknown is whether an effective mechanism for identifying vehicles from a video stream is feasible. How many constraints must be imposed, and of what nature (e.g., which viewpoints to use), remain open questions.

Contribution of the Study

The primary contribution of our study will be the creation of an approach for the efficient capture, storage, and retrieval of objects for the automatic detection of automobiles using still shots extracted from streaming video. A secondary contribution will be laying the groundwork to apply these techniques to detecting automobiles directly from live video feeds, as well as to detecting other objects.


Research Methods to Be Employed

This research project will use a non-experimental, quantitative research methodology.

The major repository will be a template database of vehicles and their components; in addition, there will be a set of individual templates of acquired vehicles.

Specific Procedures to Be Employed

The research approach is to set up a camera in various positions around the subject vehicle. Views to be captured will include front, rear, driver side, and a 45-degree view of the driver’s side. The lighting will be held consistent, as will the distance from the vehicles. The camera will capture the image of the subject vehicle and pass that image to the feature extractor. Once the features have been extracted, the resulting template will be stored in the template database.

A database of vehicle templates will be used in the matching process. The same camera equipment will be used to acquire the image of the subject vehicle and to acquire the images of the vehicles that will reside in the vehicle template database. This approach to image capture will reduce the likelihood of anomalies attributable to differences in the camera equipment used to acquire the subject and database images.

The feature extraction algorithm will build the templates to be stored in the database and the template of the subject vehicle. A matching algorithm will be used to match the template of the subject vehicle to the templates stored in the database.
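The enrollment-and-matching flow just described might be sketched as follows; the feature extractor here is a deliberately crude stand-in (row/column sums) for the geometric features discussed later, and all names are hypothetical:

```python
def extract_template(image):
    """Hypothetical feature extractor: coarse row/column sums of a 2-D
    image stand in for real geometric features (grill, headlights, etc.)."""
    row_sums = [sum(r) for r in image]
    col_sums = [sum(c) for c in zip(*image)]
    return row_sums + col_sums

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class TemplateDB:
    """Template database: enroll known vehicles, then match a probe
    image to the nearest stored template."""

    def __init__(self):
        self.templates = {}  # label -> template vector

    def enroll(self, label, image):
        self.templates[label] = extract_template(image)

    def best_match(self, image):
        probe = extract_template(image)
        return min(self.templates.items(),
                   key=lambda kv: euclidean(probe, kv[1]))[0]
```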

The research has been broken down into the following four sub-problems:

  1. The first sub-problem encompasses acquiring the subject image and transmitting the image to the feature extractor. Professional digital camera equipment will be used as the sensing device to acquire the vehicle image. The equipment will require proper calibration and placement to effectively capture the vehicle image. The image will be transferred to the feature extractor using standard telecommunications technology.

Resolution of the images will be varied algorithmically; that is, lower-resolution images will be created from the higher-resolution images captured by the cameras.
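One simple way to derive the lower-resolution variants is 2x2 average pooling, shown below; production work would more likely use a proper resampling filter, so treat this as an illustrative assumption:

```python
def downsample_2x2(image):
    """Halve the resolution of a 2-D image (list of rows) by averaging
    each 2x2 block of pixels; width and height must be even."""
    assert len(image) % 2 == 0 and len(image[0]) % 2 == 0
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out
```

Applying the function repeatedly yields the full ladder of resolutions needed to test how accuracy degrades with image quality.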

  2. The second sub-problem is the feature extractor itself, which will be used to create the image template. The feature extractor will compute the salient attributes of the vehicle image [5]. A vehicle’s grill, windshield, tires, headlights, door handles, etc. are in a relatively fixed arrangement [53], and they will be the starting points of our feature extraction algorithm. Other geometric features will be explored, since the constraint on the distance of the camera will allow for absolute size estimation. We have coined the term eigencars to refer to the output of the feature extractor.

  3. The third sub-problem addresses matching the template of the subject vehicle to a template in the vehicle database. An algorithm will be written to compare the acquired vehicle’s template with those in the vehicle database, calculating a score between the acquired vehicle’s template and each stored template [5]. The vehicle templates that satisfy our pre-established threshold value will be chosen as possible matches; those that do not will be rejected by the application [5].

Once the matches are chosen, the algorithm will rank the candidates by score. This is the hybrid approach to identification described in [5]. Both matches and non-matches will be reviewed to establish the accuracy of the matching algorithm.
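The threshold-then-rank step can be sketched directly; the scores here are distance-style (lower is better) and the labels are invented:

```python
def rank_candidates(scores, threshold):
    """scores: dict of label -> distance-style score (lower is better).
    Keep only labels whose score passes the threshold, ranked
    best-first; an empty list means the probe matches nothing enrolled."""
    kept = [(s, label) for label, s in scores.items() if s <= threshold]
    return [label for s, label in sorted(kept)]
```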

  4. The final sub-problem is directly related to sub-problem 1 above. The variables involved in camera positioning, lighting, pose of the car, etc. must all be explored to determine the optimal combinations. For example, the real-world placement of cameras, such as at traffic lights and tollbooths, has to be considered. These physical constraints relate to capturing a viable image of the subject vehicle that can be used in vehicle identification.

Results Presentation

We are using the hybrid approach to the matching problem [5], and the results will be presented as a series of matrices. Each matrix will report the results for a different variable of our research, such as camera position. The columns of the matrix will indicate the different conditions of the variable, such as where the camera was positioned. The cells of the matrix will contain the percentage of correct matches. See Figure 3.1 for an example of the camera position matrix.

Camera Position – Feature Vector 1

Matcher Rank | 45 degrees | …
1st choice   |            |
2nd choice   |            |
3rd choice   |            |
4th choice   |            |

Fig. 3.1 – Results: Camera Position

Projected Outcomes

We anticipate that this research will influence several different communities including law enforcement and national security. We feel the most realistic outcomes are:

  1. Additional research will be spurred in the area of vehicle recognition, improving the overall state-of-the-art in the technology.

  2. Algorithms that improve the feature extraction and matching processes will be created based upon our initial designs.

  3. An optimal traffic surveillance system can be implemented with the intelligence to identify and track targeted vehicles.

Resource Requirements

Personnel requirements include one Project Director, one Testing Coordinator, two Researchers, two Software Developers, and one Statistician. The Project Director and Testing Coordinator will work on the project full-time at 100% salary and all other staff members will work part-time at 50% of salary.

Hardware requirements include camera equipment, PCs, servers, printers, and scanners. Facilities include office space for full-time and part-time personnel, storage of camera equipment, and setup of computer equipment. In addition, office furniture & fixtures and office supplies will be purchased.

Proposed Budget

............................. $65,000
............................. $35,000
Camera Equipment and Supplies ............ $75,000
Project Director ............ $75,000
Testing Coordinator ............ $65,000
Researchers (2) ............ $80,000
Software Developers (2) ............ $80,000
............................. $35,000
............................. $50,000
Office Furniture & Fixtures ............ $25,000
Office Supplies ............ $10,000
Total ............ $595,000

Reliability and Validity

Data established through research must be shown to be both reliable and valid. The stability reliability of the data from this research will be demonstrated through the test-retest approach [43]. We expect our results to be repeatable across multiple trials.

Criterion-related validity is classified as predictive validity, concurrent validity, convergent validity, and discriminant validity [42]. Predictive validity is specifically defined in [42] as “the operationalization's ability to predict something it should theoretically be able to predict.” We believe that predictive validity is, by definition, the most appropriate measure for our research.


Anticipated Benefits

In this research, feature extraction techniques are studied. Such features can be categorized on a coarse-to-fine spectrum. Coarse features, such as the side view or overall size of a vehicle, may be used to identify or reject a broad vehicle type such as truck or car very efficiently in real time. Fine features, such as the position of the front lights, grill size, or windshield, may be used to identify a particular model. These features are useful not only for vehicle identification but also as indices into a vehicle feature database for fast search. In many cases, the search of the database depends on the level of vehicle detail provided. Identifying a white truck and identifying a 1999 white Ford Mustang convertible involve different levels of search; the system should take less time to match a vehicle to a less specific (coarser) feature description.
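The coarse-to-fine search strategy described above can be sketched as a two-stage filter; the record fields (`class`, `length_m`, `fine`) and all values are hypothetical:

```python
def coarse_to_fine_search(records, probe):
    """records: list of dicts with 'class', 'length_m' and 'fine' keys.
    Coarse stage prunes by vehicle class and approximate size; the fine
    stage ranks only the survivors by detailed-feature distance."""
    # Coarse stage: cheap attributes reject most of the database quickly.
    survivors = [r for r in records
                 if r["class"] == probe["class"]
                 and abs(r["length_m"] - probe["length_m"]) < 0.5]

    # Fine stage: detailed features (grill, lights, windshield positions)
    # compared only within the surviving bucket.
    def fine_dist(r):
        return sum((a - b) ** 2 for a, b in zip(r["fine"], probe["fine"]))

    return sorted(survivors, key=fine_dist)

# Invented database records.
vehicle_db = [
    {"id": "truck-1", "class": "truck", "length_m": 6.0, "fine": [0.9, 0.1]},
    {"id": "car-a", "class": "car", "length_m": 4.5, "fine": [0.2, 0.8]},
    {"id": "car-b", "class": "car", "length_m": 4.4, "fine": [0.7, 0.3]},
]
```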

Also, experimental images of each sampled vehicle will be captured from multiple views. It will then be possible to determine which view of the vehicle is most effective for vehicle identification. This information is important for determining the optimal position of the camera. In a common camera surveillance situation, a limited number of cameras (usually one or two) are used to scan a large area. For example, when a camera is used to cover a multi-lane highway, each scanned picture may contain a number of vehicles, and the images of each vehicle may be at slightly different angles.

Projected Outcomes

The research will explore the application of advanced imaging techniques to vehicle recognition and, in doing so, advance the state of the art in 3D object recognition and identification using real-time video feeds. The research will focus on the detection of objects in a static environment; adapting these techniques to the dynamic environment of live video feeds is outside the scope of this proposal. The outcomes of the research will be the following:

  • Develop an approach for the identification of cars (e.g., car features, decision rules for different cars (2-door vs. 4-door) and viewpoints (orientation of the camera) for the same object, image quality, handling of background, lighting, and other natural conditions, etc.);

  • Describe an algorithm that statistically represents the approach to car identification;

  • Identify issues impacting the speed and performance of car identification;

  • Investigate the performance of the approach to car identification;

  • Compare the approach to other approaches to object detection; and

  • Summarize the approach and suggest future topics for research based on our findings.

Practical Applications of the Findings

The main application of this research is a real-time vehicle identification system.

The identification process must run in near real time to be useful for this purpose. If this can be accomplished, automatic object detection and recognition can be used to extract more information from images and help automatically label and categorize them. By making databases of large digital image collections easier to search, these techniques will make such collections accessible to wider groups of users and greatly enhance efforts such as the Amber Alert program, saving precious time in locating suspicious vehicles that match a set of features.

Constraints and Limitations of the Study

Object detection is difficult because images contain a large amount of data. The computer power and memory available for the study limit the size of the digital image collection used, which may cause some variation between the physical world and the study results.

In our study, the same camera equipment will be used to acquire the image of the subject vehicle and the images of the vehicles that will reside in the vehicle template database. In a real-world situation, it is possible that different image capture devices will be used at the enrollment and identification stages. The amount of anomaly attributable to differences in the camera equipment used to acquire the subject and database images is outside the scope of the study.

Recommendations for Additional Studies

There are several research problems that we see as a natural continuation of the work:

  • Applying the techniques for the efficient capture / storage / retrieval of objects to the automatic detection of automobiles from live video feeds.

  • Detection of other rigid objects – We would like to test the generality of the algorithm developed by applying it to other rigid objects such as boats and airplanes.

  • Detecting more challenging objects – There are more challenging objects we would like to detect such as animals that have some structural regularity, but less than cars.

  • Multiple identifications - There can be a number of vehicles in each captured image. By using the feature comparison technique, each vehicle could be evaluated.

  • Capture of still images - It is quite reasonable to use a motion camera for surveillance applications. A motion video camera has an advantage over still pictures in that a view blocked in one frame may be unblocked in a following frame.

Contributions to the Field and Advancement of Knowledge

The proposed research will advance the state of the art in 3D object detection in the following ways:

  • Creation of an approach for the efficient capture, storage, and retrieval of objects for the automatic detection of automobiles from still shots extracted from streaming video.

  • Laying the groundwork to apply these techniques to automatically detecting automobiles directly from live video feeds, as well as using these techniques to detect other objects.

  • Furthering knowledge on the use of camera technology and computer vision for the automatic detection of objects (e.g., automatic focus, zoom on specific object detectors, etc.).


References

  1. Agarwal, S. and Roth, D., “Learning a Sparse Representation for Object Detection,” ECCV, 2002.

  2. Amit, Y., “A Neural Network Architecture for Visual Selection,” Neural Computation, 12:1059-1089, 2000.

  3. Arun, K.S., Huang, T.S. and Blostein, S.D., “Least-Squares Fitting of Two 3-D Point Sets,” IEEE Transactions on Pattern Analysis and Machine Intelligence, (9):698-700, 1987.

  4. Bhanu, B., “Automatic Target Recognition: A State of the Art Survey,” IEEE Transactions on Aerospace and Electronic Systems, 22, pp. 364-379, 1986.

  5. Bolle, R.M., et al., “Guide to Biometrics,” Springer, New York, 2004.

  6. Burl, M. and Perona, P., “Recognition of planar Objects classes,” CVPR ’96, pp. 223-230, 1996.

  7. Burl, M.C., Weber, M. and Perona, P., “A Probabilistic Approach to Object Recognition Using Local Photometry and Global Geometry,” 5th European Conference on Computer Vision, 1998.

  8. Casasent, D. and Neiberg, L., “Classifier and Shift-Invariant Automatic Target Recognition Neural Networks,” Neural Networks, Vol. 8, pp. 1117-1129, 1995.

  9. Chen, H., Belhumeur, P. and Jacobs, D., “In Search of Illumination Invariants,” CVPR, pp. 254-261, 2000.

  10. Chow, C.K. and Liu, C.N., “Approximating Discrete Probability Distributions with Dependence Trees,” IEEE Transactions on Information Theory, IT-14(3), 1968.

  11. Cortes, C. and Vapnik, V., “Support-Vector Networks,” Machine Learning, 20:273-297, 1995.

  12. Cosman, P.C., Gray, R.M. and Vetterli, M., “Vector Quantization of Image Subbands: A Survey,” IEEE Transactions on Image Processing, 5:2, pp. 202-225, February, 1996.

  13. Domingos, P. and Pazzani, M., “On the Optimality of the Simple Bayesian Classifier Under Zero-One Loss,” Machine learning, 29, pp. 103-130, 1997.

  14. Field, D.J., “Wavelets, Vision and the Statistics of Natural Scenes,” Philosophical Transactions of the Royal Society: Mathematical, Physical and Engineering Sciences, 55(1):119-139, 1997.

  15. Fergus, R., Perona, P. and Zisserman, A., “Object Class Recognition by Unsupervised Scale-Invariant Learning,” CVPR, 2003.

  16. Forsyth, D.A., et al., “Invariant Descriptors for 3D Recognition and Pose,” PAMI, 13:10, 1991.

  17. Freeman, W.T. and Adelson, E.H., “The Design and Use of Steerable Filters,” PAMI, 13:9, pp. 891-906, 1991.

  18. Freund, Y. and Schapire, R.E., “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting,” Journal of Computer and System Sciences, 55:1, pp. 119-139, 1997.

  19. Geman, D. and Fleuret, F., “Coarse-to-Fine Face Detection,” International Journal of Computer Vision, 41:85-107, 2001.

  20. Gori, M. and Scarselli, F., “Are Multilayer Perceptrons Adequate for Pattern Recognition and Verification?,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1121-1132, 1998.

  21. Heisele, B., Serre, T., Prentice, S. and Poggio, T., “Hierarchical Classification and Feature Reduction for Fast Face Detection with Support Vector Machines,” Pattern Recognition, Vol. 36, No. 9, pp. 2007-2017, 2003.

  22. Hoiem, D., Sukthankar, R., Schneiderman, H. and Huston, L., “Object-Based Image Retrieval Using the Statistical Structure of Images,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

  23. Kononenko, I., “Semi-Naïve Bayesian Classifier,” Sixth European Working Session on Learning, pp. 206-219, 1991.

  24. Krumm, J., “Eigenfeatures for Planar Pose Measurement of Partially Occluded Objects,” CVPR ’96, pp. 55-60.

  25. Kung, S.Y., “Digital Neural Networks,” Prentice-Hall, 1993.

  26. Lades, M., Vorbruggen, J.C., Buhmann, J., Lange, J., Malsburg, C.V.D., Wurtz, R.P. and Konen, W., “Distortion Invariant Object Recognition in the Dynamic Link Architecture,” IEEE Transactions on Computers, 42(3):300-311, 1993.

  27. Lakshmi Ratan, A., Grimson, W.E.L. and Wells, W.M., “Object Detection and Localization by Dynamic Template Warping,” IJCV, 36(2):131-148, 2000.

  28. Lewis II, P.M., “Approximating Probability Distributions to Reduce Storage Requirements,” Information and Control, 2:214-225, 1959.

  29. Li, S.Z., Zhang, Z.Q., Shum, H. and Zhang, H.J., “FloatBoost Learning for Classification,” NIPS 16, 2002.

  30. Minsky, M. and Papert, S., “Perceptrons: An Introduction to Computational Geometry,” MIT Press, Cambridge, MA, 1988.

  31. Mikolajczyk, K., Choudhury, R. and Schmid, C., “Face Detection in a Video Sequence – a Temporal Approach,” CVPR, Vol. 2, pp. 96-101, 2001.

  32. Moghaddam, B. and Pentland, A., “Probabilistic Visual Learning for Object Representation,” PAMI, 19(7), pp. 696-710, July, 1997.

  33. Murase, H. and Nayar, S. “Visual Learning and Recognition of 3D Objects from Appearance,” IJCV, 14(1), 1995, pp. 5-24.

  34. Nagao, K. and Grimson, W.E.L., “Using Photometric Invariants for 3D Object Recognition,” CVIU, 71(1):74-93, 1998.

  35. Ohta, K. and Ikeuchi, K., “Recognition of Multi-Specularity Objects using Eigen-Window,” Tech. Rep. CMU-CS-96-105, Feb., ’96.

  36. Oren, M., et al., “Pedestrian Detection Using Wavelet Templates,” CVPR ’97, pp. 193-199.

  37. Osuna, E., Freund, R. and Girosi, F., “Training Support Vector Machines: An Application to Face Detection,” CVPR ’97, pp. 130-136.

  38. Pandya, A.S. and Macy, R.B., “Pattern Recognition with Neural Networks in C++,” CRC Press, Boca Raton, FL, 1996.

  39. Papageorgiou, C.P. and Poggio, T., “A Trainable Object Detection System: Car Detection in Static Images,” MIT AI Memo No. 180, October, 1999.

  40. Phillips, P.J., et al., “The FERET Evaluation Methodology for Face-Recognition Algorithms,” CVPR ’97, pp. 137-143.

  41. Rajagopalan, A.N., Burlina, P. and Chellappa, R., “Higher Order Statistical Learning for Vehicle Detection in Images,” ICCV ’99, pp. 1204-.

  42. Research Methods Knowledge Base, “Measurement of Validity Types,” 2002.

  43. Research Methods Knowledge Base, “Types of Reliability,” 2002.

  44. Ripley, B.D., “Pattern Recognition and Neural Networks,” Cambridge University Press, 1996.

  45. Romdhani, S., Torr, P., Scholkopf, B. and Blake, A., “Computationally Efficient Face Detection,” International Conference on Computer Vision, pp. 695-700, 2001.

  46. Roth, D., Yang, M-H. and Ahuja, N., “A SNoW-Based Face Detector,” NIPS ’99.

  47. Rowley, H., Baluja, S. and Kanade, T., “Neural Network-Based Face Detection,” PAMI, 20(1), January, 1998.

  48. Rowley, H., “Neural Network-Based Face Detection,” Ph.D. Thesis, CMU-CS-99-117, 1999.

  49. Rowley, H. Baluja, S. and Kanade, T., “Rotation Invariant Neural Network-Based Face Detection,” IEEE Conference on Computer Vision and Pattern Recognition, June, 1998.

  50. Sadr, J., Mukherjee, S., Thoresz, K., Sinha, P., “The Fidelity of Local Ordinal Encoding,” NIPS 14, 2002.

  51. Schiele, B. and Crowley, J.L., “Probabilistic Object Recognition Using Multidimensional Receptive Field Histograms,” International Conference on Pattern Recognition, 1996.

  52. Schiele, B. and Crowley, J.L., “Recognition Without Correspondence Using Multidimensional Receptive Field Histograms,” International Journal of Computer Vision, 36(1):31-50, 2000.

  53. Schneiderman, H., “A Statistical Approach to 3D Object Detection Applied to Faces and Cars,” Doctoral Dissertation, Tech. Report 00-06, Robotics Institute, Carnegie Mellon University, May, 2000.

  54. Schneiderman, H. and Kanade, T., “A Statistical Method for 3D Object Detection Applied to Faces and Cars,” IEEE Conference on Computer Vision and Pattern Recognition, IEEE, June, 2000.

  55. Schneiderman, H. and Kanade, T., “Object Detection Using the Statistics of Parts,” International Journal of Computer Vision, 2002.

  56. Schneiderman, H., “Learning Statistical Structure for Object Detection,” Computer Analysis of Images and Patterns (CAIP), 2003.

  57. Schneiderman, H., “Feature-Centric Evaluation for Efficient Cascaded Object Detection,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

  58. Schneiderman, H., “Learning a Restricted Bayesian Network for Object Detection,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.

  59. Schneiderman, H. and Kanade, T., “Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition,” CVPR ’98, pp. 45-51, 1998.

  60. Schapire, R.E. and Singer, Y., “Improved Boosting Algorithms Using Confidence-rated Predictions,” Machine Learning, 37:3, pp. 297-336, December, 1999.

  61. Sinha, P. and Torralba, A., “Detecting Faces in Impoverished Images,” MIT AI Memo 2001-028, 2001.

  62. Slater, D. and Healey, G. “The Illumination-Invariant Recognition of 3D Objects Using Local Color Invariants,” PAMI, 18:2, pp. 206-210, 1996.

  63. Strang, G. and Nguyen, T., “Wavelets and Filter Banks,” Wellesley-Cambridge Press, 1997.

  64. Sung, K-K and Poggio, T., “Example-Based Learning of View-Based Human Face Detection,” ACCV ’95 and AI Memo #1521, 1572, MIT.

  65. Swain, M. and Ballard, D., “Color Indexing,” IJCV, 7(1):11-32, 1991.

  66. Vapnik, V., “The Nature of Statistical Learning Theory,” Springer, 1995.

  67. Vetterli, M. and Kovacevic, J., “Wavelets and Sub-band Coding,” Prentice-Hall, 1995.

  68. Viola, P. and Jones, M., “Rapid Object Detection Using Boosted Cascade of Simple Features,” IEEE Conference on Computer Vision and Pattern Recognition, 2001.

  69. Wiskott, L., Fellous, J-M, Kruger, N. and Malsburg, C.V.D., “Face Recognition by Elastic Bunch Graph Matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):775-779, 1997.

  70. Wood, J. “Invariant Pattern Recognition: A Review,” Pattern Recognition, 29:1, pp. 1-17, 1996.

  71. Wu, J., Rehg, J.M. and Mullin, M.D., “Learning a Rare Event Detection Cascade by Direct Feature Selection,” NIPS, 2003.

  72. Zhang, Z.Q., Zhu, L., Li, S.Z. and Zhang, H.J., “Real-Time Multi-View Face Detection,” 5th International Conference on Automatic Face and Gesture Recognition, 2002.

  73. Zisserman, A., et al., “3D Object Recognition Using Invariance,” Artificial Intelligence, 78(1-2):239-288, 1995.
