
Face Recognition Technology PowerPoint Presentation


PowerPoint is the world's most popular presentation software and lets you create a professional Face Recognition Technology presentation quickly. You can use it to present Face Recognition Technology at a conference, in a school lecture, in a business proposal, in a webinar, or in other business and professional settings.

The uploader spent their valuable time creating these Face Recognition Technology presentation slides to share useful content with the world. This presentation, uploaded by worldwideweb in the Science & Technology category, is available for free download and can be adapted to industries such as finance, marketing, education, health, and many more.

About This Presentation

Slide 1 - Face Recognition, by Shivankush Aras, ArunKumar Subramanian, and Zhi Zhang (title slide).
Slide 2 - Overview of Face Recognition. Face recognition technology involves analyzing facial characteristics, storing the extracted features in a database, and using them to identify users. Facial scan process flow:
1. Sample capture (sensors)
2. Feature extraction (creation of a template)
3. Template comparison: verification is a 1-to-1 comparison that gives a yes/no decision; identification is a 1-to-many comparison that gives a ranked list of matches
4. Matching, using different matching algorithms
Slide 3 - Technically, recognition is a three-step procedure:
1. Sensor: takes an observation and produces a biometric signature (e.g. a camera).
2. Normalization: converts the observation into the same format as the signatures in the database, producing a normalized signature (e.g. shape alignment, intensity correction).
3. Matcher: compares the normalized signature with the set of normalized signatures in the system database and returns a similarity score or distance measure (e.g. a Bayesian matching technique). A minimal matching sketch is shown below.
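
The verification and identification modes from slides 2 and 3 can be expressed in a few lines. This is a minimal sketch, assuming templates are fixed-length feature vectors compared with Euclidean distance; the function names, threshold, and toy templates are illustrative and not part of the original deck.

```python
import numpy as np

def verify(probe, enrolled_template, threshold=0.5):
    """1-to-1 comparison: accept if the distance is below a threshold."""
    distance = np.linalg.norm(probe - enrolled_template)
    return distance <= threshold  # yes/no decision

def identify(probe, gallery):
    """1-to-many comparison: return identities ranked by distance."""
    scores = {name: np.linalg.norm(probe - template)
              for name, template in gallery.items()}
    return sorted(scores.items(), key=lambda item: item[1])  # ranked list of matches

# Toy usage with hypothetical 4-dimensional templates.
gallery = {"alice": np.array([0.1, 0.9, 0.3, 0.5]),
           "bob":   np.array([0.8, 0.2, 0.7, 0.1])}
probe = np.array([0.12, 0.88, 0.31, 0.49])
print(verify(probe, gallery["alice"]))   # True
print(identify(probe, gallery))          # alice ranked first, then bob
```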
Slide 4 - Considerations for a potential face recognition system:
- Mode of operation
- Size of the database for identification or watch-list use
- Demographics of anticipated users
- Lighting conditions
- Whether the system is installed overtly or covertly
- User behavior
- How long since the last image was enrolled
- Required throughput rate
- Minimum accuracy requirements
Slide 5 - Primary facial scan technologies:
1. Eigenfaces ("one's own face"): uses two-dimensional global grayscale images representing distinctive characteristics.
2. Feature analysis: accommodates changes in appearance or facial aspect.
3. Neural networks: features from the enrollment and verification faces vote on a match.
4. Automatic face processing: uses distances and distance ratios; suited to dimly lit, frontal image capture.
Slide 6 - Sensors used for image capture: standard off-the-shelf PC cameras and webcams. Requirements:
- Sufficient processor speed (the main factor)
- An adequate video card
- 320 x 240 resolution
- 3-5 frames per second (more frames per second and higher resolution lead to better performance)
This is one of the cheaper sensing technologies, starting at about $50.
Slide 7 - FaceCam. Developed by VisionSphere; face recognition technology integrated with speech recognition in one device. Features: user-friendly, cost-effective, non-intrusive, auto-enrollment, auto-location of the user, voice prompting, and immediate user feedback.
Slide 8 - Components of FaceCam: integrated camera, LCD display panel, alphanumeric keypad, speaker, and microphone, attached to a Pentium II class IBM-compatible PC containing an NTSC capture card and VisionSphere's face recognition software. Advantages of FaceCam: a liveness test is performed, and the false accept rate and false reject rate are both approximately 1%. Other sensors: A4Vision technology, which uses structured light in the near-infrared range, and PaPeRo (NEC's Partner-type Personal Robot).
Slide 9 - Feature extraction: dimensionality reduction transforms.
- Karhunen-Loeve Transform/Expansion
- Principal Component Analysis
- Singular Value Decomposition
- Linear Discriminant Analysis
- Fisher Discriminant Analysis
- Independent Component Analysis
- Discrete Cosine Transform
- Gabor Wavelet
- Spectrofaces
- Fractal image coding
Slide 10 - Dimensionality reduction transforms: the Karhunen-Loeve transform. The KL transform performs dimensionality reduction based on a statistical analysis of the set of images through their covariance matrix. The eigenvectors and eigenvalues of the covariance matrix are computed, and only the eigenvectors corresponding to the largest eigenvalues are retained, i.e. the directions in which the images show the highest variance. Once the eigenvectors (referred to as eigenpictures) are obtained, any image can be approximately reconstructed as a weighted combination of eigenpictures; the more eigenpictures are kept, the more accurate the approximation of the face images. A minimal numerical sketch follows.
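
The sketch below illustrates the eigenpicture idea from slide 10 with NumPy, assuming each face image is flattened into a row of a data matrix; the random images, image size, and number of retained eigenpictures are placeholders, and the full covariance eigendecomposition is used for clarity rather than efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((40, 32 * 32))             # 40 hypothetical 32x32 face images, flattened

mean_face = X.mean(axis=0)
centered = X - mean_face                  # analyze deviations from the mean face
cov = np.cov(centered, rowvar=False)      # covariance matrix of the pixel values

eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues come back in ascending order
order = np.argsort(eigvals)[::-1]
k = 10                                    # keep only the eigenpictures with the largest eigenvalues
eigenpictures = eigvecs[:, order[:k]]

# Any face is approximated by a weighted combination of the retained eigenpictures.
weights = centered @ eigenpictures        # projection weights for every image
reconstruction = mean_face + weights @ eigenpictures.T
print(np.abs(X - reconstruction).mean())  # the error shrinks as k grows
```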
Slide 11 - Principal Component Analysis. Each spectrum in the calibration set has a different set of scaling constants for each variation, since the concentrations of the constituents all differ; the fraction of each "variation spectrum" that must be added to reconstruct the unknown data is therefore related to the concentrations of the constituents. The "variation spectra" are often called eigenvectors (also spectral loadings, loading vectors, principal components, or factors), after the methods used to calculate them, and the scaling constants used to reconstruct the spectra are generally known as scores. This method of breaking a set of spectroscopic data down into its most basic variations is called Principal Component Analysis (PCA): it separates the data into the most common variations (factors, eigenvectors, loadings) and the corresponding scaling coefficients (scores). A short sketch of loadings and scores follows.
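
The loadings/scores terminology on slide 11 maps directly onto scikit-learn's PCA. This is a minimal sketch on random stand-in data (for spectra or face vectors); the sample counts and component number are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.random((50, 100))                 # 50 samples, 100 variables

pca = PCA(n_components=5)
scores = pca.fit_transform(X)             # scaling coefficients ("scores"), shape (50, 5)
loadings = pca.components_                # principal components ("loadings"), shape (5, 100)

# Each sample is approximately the mean plus a weighted sum of the loadings,
# with the scores acting as the weights.
reconstruction = pca.mean_ + scores @ loadings
print(np.abs(X - reconstruction).mean())
```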
Slide 12 - Principal Component Analysis (continued).
Slide 13 - Other dimensionality reduction transforms:
- Factor analysis: a statistical method for modeling the covariance structure of high-dimensional data using a small number of latent variables; it is analogous to PCA.
- LDA/FDA: training is carried out via scatter-matrix analysis (a small sketch follows this list).
- Singular value decomposition.
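
The scatter-matrix analysis mentioned for LDA/FDA can be sketched as follows, assuming feature vectors with integer class labels; the two-class Gaussian toy data and the single retained discriminant direction are illustrative choices, not part of the original deck.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)),
               rng.normal(2.0, 1.0, (30, 5))])    # two hypothetical classes
y = np.array([0] * 30 + [1] * 30)

overall_mean = X.mean(axis=0)
Sw = np.zeros((5, 5))                              # within-class scatter
Sb = np.zeros((5, 5))                              # between-class scatter
for c in np.unique(y):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    diff = (mc - overall_mean).reshape(-1, 1)
    Sb += len(Xc) * diff @ diff.T

# Directions that maximize between-class scatter relative to within-class scatter.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(eigvals.real)[::-1]
projection = eigvecs[:, order[:1]].real            # one discriminant direction for two classes

projected = X @ projection                         # class 0 and class 1 separate along this axis
print(projected[:3].ravel(), projected[-3:].ravel())
```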
Slide 14 - Discrete Cosine Transform. The DCT is a transform used to compress the representation of the data by discarding redundant information, and it is the transform adopted by JPEG. Analogous to the Fourier transform, the DCT maps signals or images from the spatial domain to the frequency domain by means of sinusoidal basis functions, except that the DCT uses real cosine functions. The DCT basis is independent of the set of images. The DCT is not applied to the entire image; it is taken over square sampling windows. A blockwise sketch follows.
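
The square-sampling-window idea on slide 14 can be sketched as blockwise DCT feature extraction. This assumes an 8x8 window (as in JPEG) and keeps only a small low-frequency corner of each block as the feature; the random placeholder image and the 3x3 coefficient cut are illustrative choices.

```python
import numpy as np
from scipy.fftpack import dct

rng = np.random.default_rng(3)
image = rng.random((64, 64))                      # placeholder grayscale image

def dct2(block):
    """2-D DCT built from two 1-D type-II DCTs."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

features = []
for r in range(0, image.shape[0], 8):
    for c in range(0, image.shape[1], 8):
        coeffs = dct2(image[r:r + 8, c:c + 8])
        features.append(coeffs[:3, :3].ravel())   # keep only low-frequency coefficients

features = np.concatenate(features)               # compact descriptor for the whole image
print(features.shape)                             # 64 blocks * 9 coefficients = (576,)
```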
Slide 15 - Discrete Cosine Transform (continued).
Slide 16 - Gabor Wavelet. Preprocessing images with Gabor wavelets is chosen for its biological relevance and technical properties: Gabor wavelets have a shape similar to the receptive fields of simple cells in the primary visual cortex. They are localized in both the space and frequency domains and have the shape of plane waves restricted by a Gaussian envelope function. They capture spatial localization, orientation selectivity, spatial frequency selectivity, and the quadrature phase relationship; they provide a simple model for the responses of simple cells in the primary visual cortex; they extract edge and shape information; and they can represent a face image in a very compact way. A small construction sketch follows.
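
The "plane wave restricted by a Gaussian envelope" description on slide 16 can be written out directly. This is a simplified sketch, assuming an isotropic Gaussian envelope; the kernel size, wavelength, and sigma are illustrative values, and the real and imaginary parts correspond to the two panels mentioned on slide 18.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex Gabor wavelet: plane wave along orientation theta times a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)        # rotate to the chosen orientation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    wave = np.exp(1j * 2 * np.pi * x_rot / wavelength)   # plane wave along the orientation
    return envelope * wave

kernel = gabor_kernel(theta=np.pi / 4)
real_part, imag_part = kernel.real, kernel.imag          # the two parts shown on slide 18
print(real_part.shape, imag_part.shape)                  # (31, 31) (31, 31)

# A filter bank over several orientations is what a face representation would typically use.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```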
Slide 17 - Gabor Wavelet (continued).
Slide 18 - Gabor Wavelet (figure): real part and imaginary part of the filter.
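The real and imaginary parts shown on this slide correspond to the cosine (even) and sine (odd) components of a complex plane wave restricted by a Gaussian envelope. Below is a minimal NumPy sketch of such a kernel; the kernel size, wavelength, orientation, and sigma are illustrative assumptions, not values from the slides.

    import numpy as np

    def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
        """Complex Gabor kernel: a plane wave under a Gaussian envelope.
        All parameter values here are illustrative assumptions."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        # Rotate coordinates to the filter orientation theta.
        x_theta = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.exp(1j * 2.0 * np.pi * x_theta / wavelength)
        return envelope * carrier

    kernel = gabor_kernel(theta=np.pi / 4)
    real_part = kernel.real   # cosine (even) component
    imag_part = kernel.imag   # sine (odd) component, in quadrature phase

Convolving a face image with a bank of such kernels at several orientations and wavelengths yields the localized, orientation- and frequency-selective responses described on the previous slide.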
Slide 19 - Gabor Wavelet. Advantages: fast; acceptable accuracy; small training set. Disadvantages: affected by complex backgrounds; only slight rotation invariance.
Slide 20 - SpectroFace. A face representation method that combines the wavelet transform and the Fourier transform and has been shown to be invariant to translation, in-plane rotation, and scale. First-order spectroface: extracts features that are translation invariant and insensitive to facial expressions, small occlusions, and minor pose changes (see the sketch below). Second-order spectroface: extracts features that are invariant to in-plane rotation and scale.
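A rough sketch of the first-order idea as described above: a wavelet low-pass subband followed by the Fourier magnitude spectrum, whose invariance to circular translation supplies the translation invariance. The Haar wavelet, the single-level decomposition, and PyWavelets as the library are assumptions for illustration, not the published SpectroFace formulation.

    import numpy as np
    import pywt  # PyWavelets

    def first_order_sketch(face):
        """Wavelet low-frequency subband, then the Fourier magnitude.
        The Haar wavelet and single-level decomposition are assumptions."""
        approx, _details = pywt.dwt2(face, 'haar')   # low-pass (approximation) subband
        return np.abs(np.fft.fft2(approx))           # magnitude discards phase (translation)

    face = np.random.rand(64, 64)
    feature = first_order_sketch(face)

    # Sanity check of the underlying property: the Fourier magnitude of an image
    # is exactly invariant to circular translation (only the phase changes).
    img = np.random.rand(64, 64)
    shifted = np.roll(img, 7, axis=0)
    assert np.allclose(np.abs(np.fft.fft2(img)), np.abs(np.fft.fft2(shifted)))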
Slide 21 - SpectroFace (illustration).
Slide 22 - Fractal Image Coding. An arbitrary image is encoded into a set of transformations, usually affine. To obtain a fractal model of a face image, the image is partitioned into non-overlapping smaller blocks (range blocks) and overlapping blocks (domain blocks). A domain pool is prepared from the available domain blocks. For each range block, a search is made through the domain pool to find the domain block whose contractive transformation best approximates the range block; a distance metric such as RMS error measures the approximation quality (see the sketch below).
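The following is a toy sketch of the range/domain search described above, using a least-squares fit of a contrast/brightness (affine) grayscale map and RMS error as the distance metric. The block sizes, the domain-pool step, and the 2:1 spatial contraction are illustrative assumptions.

    import numpy as np

    def fractal_encode(image, range_size=8, domain_step=8):
        """Toy fractal-coding search: for each non-overlapping range block, find
        the (contracted) domain block and affine grayscale map s*d + o that best
        approximates it in the RMS sense. Parameters are assumptions."""
        h, w = image.shape
        dsize = range_size * 2

        # Build the domain pool: overlapping blocks, spatially contracted 2:1.
        domains = []
        for y in range(0, h - dsize + 1, domain_step):
            for x in range(0, w - dsize + 1, domain_step):
                block = image[y:y + dsize, x:x + dsize]
                contracted = block.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
                domains.append(((y, x), contracted.ravel()))

        code = []
        for ry in range(0, h, range_size):
            for rx in range(0, w, range_size):
                r = image[ry:ry + range_size, rx:rx + range_size].ravel()
                best = None
                for (dy, dx), d in domains:
                    s, o = np.polyfit(d, r, 1)                 # contrast s, brightness o
                    err = np.sqrt(np.mean((s * d + o - r) ** 2))  # RMS approximation error
                    if best is None or err < best[0]:
                        best = (err, (dy, dx), s, o)
                code.append(((ry, rx), best[1], best[2], best[3]))
        return code

    code = fractal_encode(np.random.rand(32, 32))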
Slide 23 - Fractal Image Coding: Main Characteristic. Relies on the assumption that image redundancy can be efficiently captured and exploited through piecewise self-transformability on a block-wise basis; the original image is approximated by the fractal image obtained from a finite number of iterations of an image transformation called the fractal code (a toy decoding sketch follows).
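The "finite number of iterations of the fractal code" can be sketched as a toy decoder that starts from an arbitrary image and repeatedly applies the stored domain-to-range mappings. It assumes the code format produced by the toy encoder sketched after Slide 22 and is not a production decoder.

    import numpy as np

    def fractal_decode(code, shape, range_size=8, iterations=8):
        """Toy decoder: iterate the fractal code from an arbitrary start image;
        the iterates converge toward an approximation of the encoded image.
        Assumes the (range_pos, domain_pos, s, o) tuples of the toy encoder."""
        img = np.zeros(shape)
        dsize = range_size * 2
        for _ in range(iterations):
            new = np.empty(shape)
            for (ry, rx), (dy, dx), s, o in code:
                block = img[dy:dy + dsize, dx:dx + dsize]
                contracted = block.reshape(range_size, 2, range_size, 2).mean(axis=(1, 3))
                new[ry:ry + range_size, rx:rx + range_size] = s * contracted + o
            img = new
        return img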
Slide 24 - Data Acquisition Problems: illumination, pose variation, emotion.
Slide 25 - The Illumination Problem in Face Recognition: variability in illumination; contrast model.
Slide 26 - Approaches to counter the illumination problem. Heuristic approaches discard the three most significant principal components, on the assumption that the first few principal components capture mainly variation in lighting (a minimal sketch of this idea follows below). Image comparison approaches use image representations such as edge maps, derivatives of the gray level, images filtered with 2D Gabor-like functions, and representations obtained by applying a log function of the intensity to these representations; they rest on the observation that the difference between two images of the same object is smaller than the difference between images of different objects, and they extract distance measures such as pointwise distance, regional distance, affine-GL distance, local affine-GL distance and log pointwise distance.
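The "discard the leading components" heuristic can be shown in a few lines of linear algebra. In this sketch, assumed for illustration only (the row-per-face matrix layout, the counts n_drop and n_keep, and the use of a plain SVD are not specified by the slides), the first three principal directions are skipped when projecting:

```python
import numpy as np

def illumination_insensitive_features(faces, n_drop=3, n_keep=50):
    """Project face vectors onto principal components, skipping the first
    n_drop components on the assumption that they mostly encode lighting.

    faces: (num_images, num_pixels) matrix, one flattened face per row.
    """
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # Rows of vt are the principal directions, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[n_drop:n_drop + n_keep]      # discard the leading components
    return centered @ basis.T               # (num_images, n_keep) features
```

Recognition would then compare probe and gallery images in this reduced feature space, on the assumption that the lighting-dominated variation has been removed.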
Slide 27 - Class-based approaches require three aligned training images acquired under different lighting conditions (e.g. Kohonen's SOM); they assume that faces of different individuals have the same shape but different textures, and are advantageous because they use a small set of images. 3D-model-based approaches: an eigenhead approximation of a 3D head is obtained after training on about 300 laser-scanned range images of real human heads, which turns the shape-from-shading problem into a parametric one; an alternative is symmetric SFS, which in theory allows pointwise 3D information about a symmetric object to be uniquely recovered from a single 2D image. Both are based on the observation that all faces have a similar 3D shape. A minimal sketch of the three-lighting-image idea follows below.
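One common reading of the "three aligned training images under different lighting conditions" requirement is that, under an idealized Lambertian model without shadows, the images of a single face under varying illumination lie near a three-dimensional linear subspace spanned by those images. The sketch below, whose function name, inputs and least-squares fit are illustrative assumptions rather than the method on the slide, scores a probe image by how well that subspace reconstructs it:

```python
import numpy as np

def lighting_subspace_residual(query, train_imgs):
    """Fit a query face as a linear combination of three aligned training
    images of the same person taken under different lighting, and return the
    RMS residual of the fit (a small residual suggests the same person).

    query:      (num_pixels,) flattened, aligned probe image.
    train_imgs: (3, num_pixels) the three aligned gallery images.
    """
    B = train_imgs.T                              # (num_pixels, 3) lighting basis
    coeffs, *_ = np.linalg.lstsq(B, query, rcond=None)
    reconstruction = B @ coeffs
    return float(np.sqrt(np.mean((query - reconstruction) ** 2)))
```

Comparing this residual across all enrolled identities would give the ranked list of matches used in identification.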
Slide 28 - The Pose Problem in Face Recognition: the performance of biometric systems drops significantly when pose (rotation) variations are present in the image. Methods of handling the rotation problem: multi-image based approaches, in which multiple images of each person are used; hybrid approaches, in which multiple images are used during training but only one database image per person is used during recognition; and single-image based approaches, in which no pose training is carried out.
Slide 29 - Multi-image based approaches use a template-based correlation matching scheme: for each hypothesized pose, the input image is aligned to the database images corresponding to that pose; the alignment is carried out via a 2D affine transformation estimated from three key feature points; finally, the correlation scores of all pairs of matching templates are used for recognition (see the sketch below). Limitations: many different views per person are needed in the database; no lighting variations or facial expressions are allowed; and the computational cost is high due to iterative searching.
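The slide specifies only a 2D affine alignment from three key feature points followed by template correlation; the sketch below fills in one plausible realization, where the nearest-neighbour warp, the normalized cross-correlation and all function names are illustrative assumptions:

```python
import numpy as np

def affine_from_three_points(src_pts, dst_pts):
    """Solve for the 2x3 affine transform mapping three source landmarks
    (e.g. eye centers and mouth center) onto three destination landmarks."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0])
        b.append(u)
        A.append([0, 0, 0, x, y, 1])
        b.append(v)
    params = np.linalg.solve(np.array(A, float), np.array(b, float))
    return params.reshape(2, 3)

def warp_affine_nn(image, M, out_shape):
    """Nearest-neighbour warp: each output pixel samples the input at the
    inverse-mapped location (out-of-range pixels become 0)."""
    Minv = np.linalg.inv(np.vstack([M, [0, 0, 1]]))[:2]
    ys, xs = np.indices(out_shape)
    src = Minv @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy = np.rint(src[0]).astype(int), np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=float)
    out.ravel()[ok] = image[sy[ok], sx[ok]]
    return out

def correlation_score(a, b):
    """Normalized cross-correlation between two equally sized templates."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

For recognition, the input image would be warped into each hypothesized pose using the affine transform estimated from its three landmarks, correlated against the database templates for that pose, and the identity with the highest correlation score retained.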
Slide 30 - Hybrid approaches are the most successful and practical; they make use of prior class information. Methods include the linear class-based method, the graph-matching based method and the view-based eigenface method.
Slide 31 - Single-image based approaches include low-level feature-based methods, invariant feature-based methods and 3D model-based methods.
Slide 32 -
Matching Schemes: Nearest Neighbor, Neural Networks, Deformable Models, Hidden Markov Models, Support Vector Machines.
Slide 33 -
Nearest Neighbor
A naïve Nearest Neighbor classifier is usually employed in approaches that adopt a dimensionality reduction technique:
Extract the most representative/discriminant features by projecting the images of the training set into an appropriate subspace of the original space.
Represent each training image as a vector of weights obtained by the projection operation.
Represent the test image by a vector of weights as well, then compare it with the vectors of the training images in the reduced space to determine which class it belongs to (see the sketch below).
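A compact sketch of this projection-and-compare scheme, using scikit-learn's PCA for the dimensionality reduction and a plain nearest-neighbour rule; the number of components and the helper names are illustrative assumptions rather than values from the presentation.

import numpy as np
from sklearn.decomposition import PCA

def fit_subspace(train_images, n_components=50):
    """train_images: (n_samples, h*w) flattened gallery faces."""
    pca = PCA(n_components=n_components)
    weights = pca.fit_transform(train_images)   # each training image -> weight vector
    return pca, weights

def classify(test_image, pca, weights, train_labels):
    w = pca.transform(test_image.reshape(1, -1))        # project the probe image
    dists = np.linalg.norm(weights - w, axis=1)         # compare in the reduced space
    return train_labels[int(np.argmin(dists))]          # label of the nearest neighbour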
Slide 34 -
Neural Networks
A NN approach to gender classification:
Uses vectors of numerical attributes such as eyebrow thickness, widths of nose and mouth, chin radius, etc.
Two HyperBF networks were trained, one for each gender (a simplified sketch follows below).
By extending the feature vectors and training one HyperBF network per person, the system can be extended to perform face recognition.
A fully automatic face recognition system based on a Probabilistic Decision-Based NN (PDBNN):
A hierarchical modular structure
DBNN and LUGS (locally unsupervised, globally supervised) learning
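The HyperBF networks themselves are not detailed in the slides; the following much-simplified radial-basis stand-in only illustrates the idea of two per-gender scorers over geometric feature vectors (all names and the Gaussian width are hypothetical).

import numpy as np

def rbf_score(x, exemplars, sigma=1.0):
    """Sum of Gaussian radial-basis responses of feature vector x (eyebrow
    thickness, nose/mouth widths, chin radius, ...) to a set of exemplars."""
    d2 = np.sum((np.asarray(exemplars) - np.asarray(x)) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))

def classify_gender(x, male_exemplars, female_exemplars):
    # Whichever per-gender scorer responds more strongly wins.
    return "male" if rbf_score(x, male_exemplars) >= rbf_score(x, female_exemplars) else "female"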
Slide 35 -
Neural Networks - Cont
A hybrid NN solution combining local image sampling, a Self-Organizing Map (SOM) NN and a convolutional NN.
The SOM quantizes the image samples into a topological space where inputs that are nearby in the original space remain nearby, thereby providing dimensionality reduction and invariance to minor changes in the image sample (a minimal SOM sketch follows below).
The convolutional NN provides partial invariance to translation, rotation, scale, and deformation.
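A minimal Self-Organizing Map training loop illustrating the quantization role described above (the convolutional network that follows the SOM in the hybrid system is omitted); grid size, iteration count, learning rate and neighbourhood radius are arbitrary illustrative choices.

import numpy as np

def train_som(samples, grid=(8, 8), iters=2000, lr=0.5, radius=2.0, seed=0):
    """Train a small 2D SOM on local image samples (one sample per row).
    Nearby inputs end up mapped to nearby grid nodes, which is what provides
    the dimensionality reduction and mild invariance mentioned above."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    weights = rng.normal(size=(grid[0], grid[1], dim))
    coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = samples[rng.integers(len(samples))]
        # Best-matching unit: the node whose weight vector is closest to x.
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)), grid)
        # Shrink the learning rate and neighbourhood over time.
        frac = 1.0 - t / iters
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2)
                   / (2.0 * (radius * frac + 1e-3) ** 2))
        weights += (lr * frac) * h[..., None] * (x - weights)
    return weights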
Slide 36 -
Neural Networks - Cont
A system based on Dynamic Link Architecture (DLA).
DLAs use synaptic plasticity and are able to instantly form sets of neurons grouped into structured graphs while maintaining the advantages of neural systems.
Gabor-based wavelets are used for the features (see the kernel sketch below).
The structure of the signal is determined by three factors: the input image, random spontaneous excitation of the neurons, and interaction with the cells of the same or neighboring nodes.
Binding between neurons is encoded in the form of temporal correlation and is induced by the excitatory connections within the image.
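Since the DLA features are Gabor-based, here is a small sketch of a complex 2D Gabor kernel, i.e. a plane wave restricted by a Gaussian envelope as described on the Gabor wavelet slides; the kernel size, wavelength, orientation and envelope width are illustrative parameters.

import numpy as np

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex 2D Gabor kernel: a plane wave along direction theta under a
    Gaussian envelope. Its real and imaginary parts form the even/odd pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

# Convolving a face image with several scales and orientations of this kernel
# yields the localized edge/shape responses used as features.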
Slide 37 - Face Recognition Shivankush Aras ArunKumar Subramanian Zhi Zhang Overview Of Face Recognition Face Recognition Technology involves Analyzing facial Characteristics Storing features in a database Using them to identify users Facial Scan process flow :- Sample Capture – sensors Feature Extraction – creation of template Template Comparison – * Verification - 1 to 1 comparison - gives yes/no decision * Identification - 1 to many comparison - gives ranked list of matches 4. Matching – Uses different matching algorithms Technically a three-step procedure :- Sensor – * takes observation. * develops biometric signature. Eg. Camera. Normalization – * same format as signature in database. * develops normalized signature. Eg. Shape alignment, intensity correction Matcher – * compares normalized signature with the set of normalized signature in system database. * gives similarity score or distance measure. Eg. Bayesian technique for matching Considerations for a potential Face Recognition System Mode of operation Size of database for identification or watch list Demographics of anticipated users. Lighting conditions. System installed overtly or covertly User behavior How long since last image enrolled Required throughput rate Minimum accuracy requirements Primary Facial Scan Technologies 1. Eigenfaces – “one’s own face” * Utilizes the two dimensional global grayscale images representing distinctive characteristics. 2. Feature Analysis – * accommodates changes in appearance or facial aspect. 3. Neural Networks – * features from enrollment and verification face vote on match. 4. Automatic Face Processing – * uses distance and distance ratios * used in dimly lit, frontal image capture. Sensors Used for image capture Standard off-the-shelf PC cameras, webcams. Requirements: * Sufficient processor speed (main factor) * Adequate Video card. * 320 X 240 resolution. * 3-5 frames per second. ( more frames per second and higher resolution lead to a better performance.) One of the cheaper, inexpensive technologies starting at $ 50. FaceCam Developed by VisionSphere. Face recognition technology integrated with speech recognition in one device. Features User-friendly. Cost-effective. Non-intrusive. Auto-enrollment Auto-location of user. Voice prompting. Immediate user feedback. Components of FaceCam Integrated Camera LCD Display Panel Alpha-Numeric keypad Speaker, Microphone Attached to Pentium II class IBM compatible PC (containing an NTSC capture card and VisionSphere’s face recognition software) Advantages of FaceCam Liveness test is performed. False Accept rate and False Reject Rate is approximately 1%. Other sensors A4Vision technology-uses structured light in near-infrared range. PaPeRo (NEC’s Partner-type Personal Robot) Feature Extraction Dimensionality Reduction Transforms Karhunen-Loeve Transform/Expansion Principal Component Analysis Singular Value Decomposition Linear Discriminant Analysis Fisher Discriminant Analysis Independent Discriminant analysis Discrete Cosine transform Gabor Wavelet Spectrofaces Fractal image coding Dimensionality Reduction Transforms Karhunuen-Loeve Transform The KL Transform operates a dimensionality reduction on the basis of a statistical analysis of the set of images from their covariance matrix. Eigenvectors and the EigenValues of the covariance matrix are calculated and only only the eigenvectors corresponding to the largest eigenvalues are retained i.e. those in which the images present the higher variance. 
Discrete Cosine Transform
The DCT compresses the representation of the data by discarding redundant information; it was adopted by JPEG. Analogous to the Fourier transform, the DCT maps signals or images from the spatial domain to the frequency domain by means of sinusoidal basis functions, except that the DCT uses real cosine functions. The DCT basis is independent of the set of images. The DCT is not applied to the entire image but to square sampling windows.
Gabor Wavelet
Preprocessing images with Gabor wavelets is chosen for its biological relevance and technical properties. Gabor wavelets have a shape similar to the receptive fields of simple cells in the primary visual cortex: they are localized in both the space and frequency domains and have the shape of plane waves restricted by a Gaussian envelope function. They capture spatial localization, orientation selectivity, spatial-frequency selectivity, and the quadrature-phase relationship, and they provide a simple model for the responses of simple cells in the primary visual cortex. Gabor features extract edge and shape information and can represent a face image very compactly. (The slides illustrate the real and imaginary parts of a Gabor wavelet; a small filter-bank sketch follows below.)
Advantages: fast; acceptable accuracy; small training set. Disadvantages: affected by complex backgrounds; only slight rotation invariance.
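The following is a small, self-contained NumPy/SciPy sketch of the description above: a complex Gabor kernel built as a plane wave under a Gaussian envelope, applied to an image at a few orientations and wavelengths. The parameter values are illustrative assumptions, not taken from the slides.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex Gabor kernel: a plane wave restricted by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the wave propagates along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    wave = np.exp(1j * 2.0 * np.pi * x_t / wavelength)
    return envelope * wave

def gabor_responses(image, orientations=4, wavelengths=(4, 8, 16)):
    """Filter an image with a small bank of Gabor kernels and return the
    magnitude responses (one map per orientation/wavelength pair)."""
    image = image.astype(float)
    responses = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = gabor_kernel(wavelength=lam, theta=theta)
            resp = fftconvolve(image, kern, mode="same")
            responses.append(np.abs(resp))    # magnitude of the complex response
    return np.stack(responses)
```

Sampling such responses at selected facial points gives the "jets" used by the elastic-graph matching described under Deformable Models below.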
Spectroface
A face representation method based on the wavelet transform and the Fourier transform; it has been shown to be invariant to translation, in-plane rotation, and scale. The first-order spectroface extracts features that are translation-invariant and insensitive to facial expressions, small occlusions, and minor pose changes. The second-order spectroface extracts features that are invariant to in-plane rotation and scale.
Fractal Image Coding
An arbitrary image is encoded as a set of transformations, usually affine. To obtain a fractal model of a face image, the image is partitioned into non-overlapping smaller blocks (range blocks) and larger, overlapping blocks (domain blocks). A domain pool is prepared from the available domain blocks; for each range block, the domain pool is searched for the domain block whose contractive transformation best approximates the range block, with a distance metric such as RMS error measuring the approximation quality. Main characteristic: the method relies on the assumption that image redundancy can be efficiently captured and exploited through piecewise self-transformability on a block-wise basis, so that an original image is approximated by the fractal image obtained after a finite number of iterations of an image transformation called the fractal code.
Data Acquisition Problems
Illumination, pose variation, and emotion.
Illumination Problem in Face Recognition
Variability in illumination and contrast. Approaches to counter the illumination problem:
Heuristic approaches – discard the three most significant principal components, on the assumption that the first few principal components capture only the variation in lighting.
Image comparison approaches – use image representations such as edge maps, derivatives of the gray level, images filtered with 2D Gabor-like functions, and a representation that applies a log function of the intensity to these representations. Based on the observation that the difference between two images of the same object is smaller than the difference between images of different objects. Distance measures used include point-wise distance, regional distance, affine-GL distance, local affine-GL distance, and log point-wise distance.
Class-based approaches – require three aligned training images acquired under different lighting conditions; Kohonen's SOM; assume that faces of different individuals have the same shape but different textures. Advantageous because only a small set of images is needed.
3D-model-based approaches – an eigenhead approximation of a 3D head, obtained after training on about 300 laser-scanned range images of real human heads, transforms the shape-from-shading problem into a parametric problem. An alternative is symmetric SFS, which in theory allows point-wise 3D information about a symmetric object to be recovered uniquely from a 2D image. Based on the observation that all faces have a similar 3D shape.
Pose Problem in Face Recognition
The performance of biometric systems drops significantly when pose variations are present in the image (the rotation problem). Methods of handling the rotation problem:
Multi-image based approaches – multiple images of each person are used.
Hybrid approaches – multiple images are used during training, but only one database image per person is used during recognition.
Single-image based approaches – no pose training is carried out.
Multi-Image Based Approaches
Use a template-based correlation matching scheme. For each hypothesized pose, the input image is aligned to the database images corresponding to that pose; the alignment is carried out via a 2D affine transformation based on three key feature points (see the sketch below). Finally, the correlation scores of all pairs of matching templates are used for recognition. Limitations: many different views per person are needed in the database; no lighting variations or facial expressions are allowed; high computational cost due to iterative searching.
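The multi-image approach above aligns the input image to a database pose via a 2D affine transform estimated from three key feature points. The sketch below is an illustration of that estimation step in NumPy; the point coordinates and their interpretation (eye centers, mouth center) are made-up examples.

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Estimate the 2D affine transform mapping three source points to
    three destination points.

    src, dst: arrays of shape (3, 2).
    Returns a 2x3 matrix A such that dst_i ~= A @ [x_i, y_i, 1].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    M = np.hstack([src, np.ones((3, 1))])     # one 3x3 system per output coordinate
    params_x = np.linalg.solve(M, dst[:, 0])
    params_y = np.linalg.solve(M, dst[:, 1])
    return np.vstack([params_x, params_y])    # shape (2, 3)

# Hypothetical correspondences: left eye, right eye, mouth center.
src = [(32, 40), (64, 40), (48, 80)]
dst = [(30, 42), (66, 41), (49, 84)]
A = affine_from_3_points(src, dst)
point = np.array([48.0, 60.0, 1.0])           # any point in the source image
print(A @ point)                              # its location after alignment
```

Three point pairs determine the six affine parameters exactly, which is why the approach needs exactly three key feature points per hypothesized pose.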
Hybrid Approaches
The most successful and practical; they make use of prior class information. Methods: linear class-based method, graph-matching based method, view-based eigenface method.
Single-Image Based Approaches
Include low-level feature-based methods, invariant feature-based methods, and 3D model-based methods.
Matching Schemes
Nearest Neighbor, Neural Networks, Deformable Models, Hidden Markov Models, Support Vector Machines.
Nearest Neighbor
A naive nearest-neighbor classifier is usually employed in approaches that adopt a dimensionality reduction technique: extract the most representative/discriminant features by projecting the images of the training set into an appropriate subspace of the original space; represent each training image as a vector of weights obtained by the projection; represent the test image by a vector of weights as well, then compare it against the training vectors in the reduced space to decide which class it belongs to (a short code sketch follows after this section).
Neural Networks
A NN approach to gender classification: using vectors of numerical attributes such as eyebrow thickness, widths of nose and mouth, chin radius, etc., two HyperBF networks were trained, one per gender. By extending the feature vectors and training one HyperBF per person, the system can be extended to perform face recognition.
A fully automatic face recognition system based on the Probabilistic Decision-Based NN (PDBNN): a hierarchical modular structure with DBNN and LUGS learning.
A hybrid NN solution combines local image sampling, a Self-Organizing Map (SOM) NN, and a convolutional NN. The SOM quantizes the image samples into a topological space in which nearby inputs in the original space remain nearby, providing dimensionality reduction and invariance to minor changes in the image sample; the convolutional NN provides partial invariance to translation, rotation, scale, and deformation.
A system based on Dynamic Link Architecture (DLA): DLAs use synaptic plasticity and can instantly form sets of neurons grouped into structured graphs while maintaining the advantages of neural systems. Gabor-based wavelets are used for the features. The structure of the signal is determined by three factors: the input image, random spontaneous excitation of the neurons, and interaction with cells of the same or neighboring nodes. Binding between neurons is encoded as temporal correlation and is induced by the excitatory connections within the image.
Deformable Models
Templates are allowed to translate, rotate, and deform so as to best fit the shape present in the image. One approach employs a wavelet decomposition of the face image as the key element of matching-pursuit filters to find the subtle differences between faces. The elastic graph approach, based on the discrete wavelet transform, applies a set of Gabor wavelets at a set of hand-selected prominent object points, so that each point is represented by a set of filter responses called a jet.
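A minimal sketch of the nearest-neighbour matching scheme described above, assuming the weight vectors come from a projection such as the eigenpicture basis shown earlier; the function names and the Euclidean distance are illustrative choices.

```python
import numpy as np

def nearest_neighbor_identify(test_weights, gallery_weights, gallery_labels):
    """Identify a probe by comparing its weight vector against the gallery.

    test_weights: (d,) weights of the probe image in the reduced space.
    gallery_weights: (n, d) weights of the enrolled images.
    gallery_labels: length-n list of identities.
    Returns (best_label, ranked_labels): a 1-to-many identification yields a
    ranked list of matches, as noted earlier in the presentation.
    """
    dists = np.linalg.norm(gallery_weights - test_weights, axis=1)
    order = np.argsort(dists)
    ranked = [gallery_labels[i] for i in order]
    return ranked[0], ranked

def nearest_neighbor_verify(test_weights, claimed_weights, threshold):
    """1-to-1 verification: accept the claimed identity if the distance in
    the reduced space falls below a chosen threshold."""
    return np.linalg.norm(test_weights - claimed_weights) < threshold
```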
Slide 38 - Hidden Markov Models
Many variants of the HMM have been introduced for the face recognition problem: luminance-based 1D HMM, DCT-based 1D HMM, 2D pseudo-HMM, embedded HMM, low-complexity 2D HMM, and hybrid HMM. The observable features in these systems are either the raw pixel values in the scanning window or a transformation of those values.
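To illustrate what these observable features look like in a luminance- or DCT-based 1D HMM, here is a small sketch that turns a face image into a sequence of observation vectors; the strip height, overlap, and number of DCT coefficients are arbitrary assumptions. A per-person HMM would then be trained on such sequences (for example with a general-purpose HMM library), and recognition would pick the enrolled model with the highest likelihood for the probe sequence.

```python
import numpy as np
from scipy.fftpack import dct

def observation_sequence(image, strip_height=8, overlap=4, n_coeffs=10):
    """Scan a face image top-to-bottom with overlapping horizontal strips and
    return one observation vector per strip (low-order 2D DCT coefficients)."""
    image = image.astype(float)
    step = strip_height - overlap
    observations = []
    for top in range(0, image.shape[0] - strip_height + 1, step):
        strip = image[top:top + strip_height, :]
        # 2D DCT of the strip (separable: along rows, then along columns).
        coeffs = dct(dct(strip, axis=0, norm="ortho"), axis=1, norm="ortho")
        # Keep the first few coefficients as the observation for this strip.
        observations.append(coeffs.flatten()[:n_coeffs])
    return np.asarray(observations)   # shape: (n_strips, n_coeffs)
```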
Slide 39 - Support Vector Machines
Being maximum-margin classifiers, SVMs are designed to solve two-class problems, while face recognition is a q-class problem, where q is the number of known individuals. Two approaches: reformulate the face recognition problem as a two-class problem, or employ a set of SVMs to solve a generic q-class recognition problem.
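A minimal sketch of the second approach (a set of SVMs for a q-class problem) using scikit-learn; the feature vectors are assumed to be, for example, the reduced-space weights from earlier, and all data here is synthetic and illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

# Hypothetical gallery: 60 enrolled images, each reduced to a 20-dimensional
# weight vector (e.g. by PCA/eigenpictures), with one integer label per person.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 20))
y_train = np.repeat(np.arange(6), 10)        # q = 6 known individuals

# One binary maximum-margin SVM per person (one-vs-rest).
model = OneVsRestClassifier(SVC(kernel="linear"))
model.fit(X_train, y_train)

probe = rng.normal(size=(1, 20))             # feature vector of a probe face
print(model.predict(probe))                  # predicted identity
```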
Slide 40 - Advantages of Face Recognition Systems
Non-intrusive – other biometrics require subject cooperation and awareness, e.g., iris recognition (looking into an eye scanner) or placing a hand on a fingerprint reader.
The biometric data is readable and can be verified by a human.
No association with crime.
Slide 41 - Face Recognition Shivankush Aras ArunKumar Subramanian Zhi Zhang Overview Of Face Recognition Face Recognition Technology involves Analyzing facial Characteristics Storing features in a database Using them to identify users Facial Scan process flow :- Sample Capture – sensors Feature Extraction – creation of template Template Comparison – * Verification - 1 to 1 comparison - gives yes/no decision * Identification - 1 to many comparison - gives ranked list of matches 4. Matching – Uses different matching algorithms Technically a three-step procedure :- Sensor – * takes observation. * develops biometric signature. Eg. Camera. Normalization – * same format as signature in database. * develops normalized signature. Eg. Shape alignment, intensity correction Matcher – * compares normalized signature with the set of normalized signature in system database. * gives similarity score or distance measure. Eg. Bayesian technique for matching Considerations for a potential Face Recognition System Mode of operation Size of database for identification or watch list Demographics of anticipated users. Lighting conditions. System installed overtly or covertly User behavior How long since last image enrolled Required throughput rate Minimum accuracy requirements Primary Facial Scan Technologies 1. Eigenfaces – “one’s own face” * Utilizes the two dimensional global grayscale images representing distinctive characteristics. 2. Feature Analysis – * accommodates changes in appearance or facial aspect. 3. Neural Networks – * features from enrollment and verification face vote on match. 4. Automatic Face Processing – * uses distance and distance ratios * used in dimly lit, frontal image capture. Sensors Used for image capture Standard off-the-shelf PC cameras, webcams. Requirements: * Sufficient processor speed (main factor) * Adequate Video card. * 320 X 240 resolution. * 3-5 frames per second. ( more frames per second and higher resolution lead to a better performance.) One of the cheaper, inexpensive technologies starting at $ 50. FaceCam Developed by VisionSphere. Face recognition technology integrated with speech recognition in one device. Features User-friendly. Cost-effective. Non-intrusive. Auto-enrollment Auto-location of user. Voice prompting. Immediate user feedback. Components of FaceCam Integrated Camera LCD Display Panel Alpha-Numeric keypad Speaker, Microphone Attached to Pentium II class IBM compatible PC (containing an NTSC capture card and VisionSphere’s face recognition software) Advantages of FaceCam Liveness test is performed. False Accept rate and False Reject Rate is approximately 1%. Other sensors A4Vision technology-uses structured light in near-infrared range. PaPeRo (NEC’s Partner-type Personal Robot) Feature Extraction Dimensionality Reduction Transforms Karhunen-Loeve Transform/Expansion Principal Component Analysis Singular Value Decomposition Linear Discriminant Analysis Fisher Discriminant Analysis Independent Discriminant analysis Discrete Cosine transform Gabor Wavelet Spectrofaces Fractal image coding Dimensionality Reduction Transforms Karhunuen-Loeve Transform The KL Transform operates a dimensionality reduction on the basis of a statistical analysis of the set of images from their covariance matrix. Eigenvectors and the EigenValues of the covariance matrix are calculated and only only the eigenvectors corresponding to the largest eigenvalues are retained i.e. those in which the images present the higher variance. 
Once the Eigenvectors (referred to as eigenpictures) are obtained, any image can be approximately reconstructed using a weighted combination of eigenpictures. The higher the number of eigenpictures, the more accurate is the approximation of face images. Principal Component Analysis Each spectrum in the calibration set would have a different set of scaling constants for each variation since the concentrations of the constituents are all different. Therefore, the fraction of each "spectrum" that must be added to reconstruct the unknown data should be related to the concentration of the constituents The "variation spectra" are often called eigenvectors (a.k.a., spectral loadings, loading vectors, principal components or factors), for the methods used to calculate them. The scaling constants used to reconstruct the spectra are generally known as scores. This method of breaking down a set spectroscopic data into its most basic variations is called Principal Components Analysis (PCA). PCA breaks apart the spectral data into the most common spectral variations (factors, eigenvectors, loadings) and the corresponding scaling coefficients (scores). Other Dimensionality reduction transforms Factor Analysis is a statistical method for modeling the covariance structure of high dimensional data using a smal number of latent variables, has analogue with PCA. LDA/FDA – training carried out via scatter-matrix analysis. Singular Value Decomposition Discrete Cosine Transform DCT is a transform used to compress the representation of the data by discarding redundant information. Adopted by JPEG Analogous to Fourier Transform, DCT transforms signals or images from the spatial domain to the frequency domain by means of sinusoidal basis functions, only that DCT adopts real sine functions. DCT basis are independent on the set of images. DCT is not applied on the entire image, but is taken from square-sampling windows. Discrete Cosine Transform Gabor Wavelet The preprocessing of images by Gabor wavelets is chosen for its biological relevance and technical properties. The Gabor wavelets are of similar shape as the receptive fields of simple cells in the primary visual cortex. They are localized in both space and frequency domains and have the shape of plane waves restricted by a Gaussian envelope function. Capture properties of spatial localization, orientation selectivity, spatial frequency selectivity and quadrature phase relationship. A simple model for the responses of simple cells in the primary visual cortex. It extracts edge and shape information. It can represent face image in a very compact way. Gabor Wavelet Gabor Wavelet Real Part Imaginary Part Gabor Wavelet Advantages: Fast Acceptable accuracy Small training set Disadvantages: Affected by complex background Slightly rotation invariance SpectroFace Face representation method using wavelet transform and Fourier Transform and has been proved to be invariant to translation, on-the-plane rotation and scale. First order Second order The first order spectroface extracts features, which are translation invariant and insensitive to facial expressions, small occlusions and minor pose changes. Second order spectroface extracts features that are invariant to on-the-plane rotation and scale. SpectroFace Fractal image Coding An arbitrary image is encoded into a set of transformations, usually affine. In order to obtain a fractal model of a face image, the image is partitioned into non-overlapping smaller blocks (range) and overlapping blocks (domain). 
Fractal Image Coding
An arbitrary image is encoded into a set of transformations, usually affine. To obtain a fractal model of a face image, the image is partitioned into non-overlapping smaller blocks (range blocks) and overlapping blocks (domain blocks). A domain pool is prepared from the available domain blocks. For each range block, a search is made through the domain pool to find a domain block whose contractive transformation best approximates the range block. A distance metric such as RMS measures the approximation error.
Main characteristic: the method relies on the assumption that image redundancy can be efficiently captured and exploited through piecewise self-transformability on a block-wise basis, and that an original image can be approximated by the fractal image obtained from a finite number of iterations of an image transformation called the fractal code.

Data Acquisition Problems
Illumination. Pose variation. Emotion.

Illumination problem in face recognition
Variability in illumination. Contrast model.

Approaches to counter the illumination problem
Heuristic approaches – discard the three most significant principal components, on the assumption that the first few principal components capture only variation in lighting. A short sketch of this heuristic follows at the end of this block.
Image comparison approaches – use image representations such as edge maps, derivatives of the gray level, images filtered with 2D Gabor-like functions, and a representation that applies a log function of the intensity to these representations. Based on the observation that the difference between two images of the same object is smaller than the difference between images of different objects. Extract distance measures such as pointwise distance, regional distance, affine-GL distance, local affine-GL distance and log pointwise distance.
Class-based approaches – require three aligned training images acquired under different lighting conditions. Kohonen's SOM. Assume that faces of different individuals have the same shape and different textures. Advantageous because they use a small set of images.
3D-model based approaches – an eigenhead approximation of a 3D head is obtained after training on about 300 laser-scanned range images of real human heads, transforming the shape-from-shading problem into a parametric problem. An alternative is symmetric SFS, which in theory allows pointwise 3D information about a symmetric object to be uniquely recovered from a 2D image. Based on the observation that all faces have a similar 3D shape.

Pose Problem in Face Recognition
The performance of biometric systems drops significantly when pose variations are present in the image (the rotation problem). Methods of handling the rotation problem:
Multi-image based approaches – multiple images of each person are used.
Hybrid approaches – multiple images are used during training, but only one database image per person is used during recognition.
Single-image based approaches – no pose training is carried out.

Multi-image based approaches
Use a template-based correlation matching scheme. For each hypothesized pose, the input image is aligned to the database images corresponding to that pose. The alignment is carried out via a 2D affine transformation based on three key feature points. Finally, correlation scores of all pairs of matching templates are used for recognition. Limitations: many different views per person are needed in the database, no lighting variations or facial expressions are allowed, and the computational cost is high due to iterative searching.
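A minimal sketch of the heuristic illumination approach mentioned above: build the eigenpicture basis as in the earlier eigenfaces sketch, but drop the three largest-variance components (assumed to capture mostly lighting) before projecting faces for comparison. The synthetic data, variable names and the number of retained components are illustrative assumptions.

import numpy as np

# Stand-in training data; in practice X, mean_face and Vt would come from
# the eigenfaces step above. Shapes are illustrative.
rng = np.random.default_rng(1)
X = rng.random((40, 32 * 32))
mean_face = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)

# Heuristic illumination handling: skip the 3 most significant components,
# assuming they mainly encode lighting variation, and keep the next 20.
basis = Vt[3:3 + 20]

def project(img_vec):
    # Represent an image by its weights on the lighting-discarded basis.
    return (img_vec - mean_face) @ basis.T

probe = X[0]
print("feature vector length:", project(probe).shape[0])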
Hybrid Approaches
The most successful and practical; they make use of prior class information. Methods: linear class-based method, graph-matching based method, view-based eigenface method.

Single-Image Based Approaches
Include low-level feature-based methods, invariant feature-based methods and 3D model-based methods.

Matching Schemes
Nearest Neighbor, Neural Networks, Deformable Models, Hidden Markov Models, Support Vector Machines.

Nearest Neighbor
A naïve Nearest Neighbor classifier is usually employed in the approaches that adopt a dimensionality reduction technique. Extract the most representative/discriminant features by projecting the images of the training set into an appropriate subspace of the original space. Represent each training image as a vector of weights obtained by the projection operation. Represent the test image by a vector of weights as well, then compare it with the training vectors in the reduced space to determine which class it belongs to. A short matching sketch follows at the end of this block.

Neural Networks
A NN approach to gender classification: using vectors of numerical attributes such as eyebrow thickness, widths of nose and mouth, chin radius, etc., two HyperBF networks were trained, one for each gender. By extending the feature vectors and training one HyperBF network per person, this system can be extended to perform face recognition.
A fully automatic face recognition system based on the Probabilistic Decision-Based NN (PDBNN): a hierarchical modular structure with DBNN and LUGS learning.
A hybrid NN solution: combining local image sampling, a Self-Organizing Map (SOM) NN and a convolutional NN. The SOM provides quantization of the image samples into a topological space where nearby inputs in the original space are also nearby, thereby providing dimensionality reduction and invariance to minor changes in the image sample. The convolutional NN provides partial invariance to translation, rotation, scale and deformation.
A system based on Dynamic Link Architecture (DLA): DLAs use synaptic plasticity and are able to instantly form sets of neurons grouped into structured graphs while maintaining the advantages of neural systems. Gabor-based wavelets are used for the features. The structure of the signal is determined by three factors: the input image, random spontaneous excitation of the neurons, and interaction with the cells of the same or neighboring nodes. Binding between neurons is encoded in the form of temporal correlation and is induced by the excitatory connections within the image.

Deformable Models
Templates are allowed to translate, rotate and deform to fit the best representation of the shape present in the image. One approach employs wavelet decomposition of the face image as the key element of matching pursuit filters to find the subtle differences between faces. The elastic graph approach, based on the discrete wavelet transform, applies a set of Gabor wavelets at a set of hand-selected prominent object points, so that each point is represented by a set of filter responses called a jet.

Hidden Markov Models
Many variations of the HMM have been introduced for the face recognition problem: luminance-based 1D-HMM, DCT-based 1D-HMM, 2D pseudo-HMM, embedded HMM, low-complexity 2D HMM, hybrid HMM. The observable features of these systems are either the raw values of the pixels in the scanning element or transformations of those values.
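Here is a minimal sketch of the nearest-neighbor matching scheme described above, operating on reduced-space weight vectors such as those produced by the earlier eigenfaces sketch. The gallery size, labels and noise level are illustrative assumptions.

import numpy as np

# Gallery weights (one row per enrolled image) and their identity labels;
# probe_w is the weight vector of the test image. Values are illustrative.
rng = np.random.default_rng(2)
gallery_w = rng.random((40, 20))
labels = np.repeat(np.arange(10), 4)     # 10 people, 4 enrolled images each
probe_w = gallery_w[7] + 0.01 * rng.standard_normal(20)

# Nearest neighbor in the reduced space: the smallest Euclidean distance wins.
dists = np.linalg.norm(gallery_w - probe_w, axis=1)
best = np.argmin(dists)
print("predicted identity:", labels[best], "distance:", round(float(dists[best]), 4))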
Support Vector Machines
Being maximum-margin classifiers, SVMs are designed to solve two-class problems, while face recognition is a q-class problem, where q is the number of known individuals. Two approaches: reformulate the face recognition problem as a two-class problem, or employ a set of SVMs to solve a generic q-class recognition problem. A one-vs-rest sketch follows at the end of this slide's text.

Advantages of Face Recognition Systems
Non-intrusive – other biometrics require subject co-operation and awareness, e.g. iris recognition (looking into an eye scanner) or placing a hand on a fingerprint reader. The biometric data is readable and can be verified by a human. No association with crime.

Applications for Face Recognition Technology
Government use: law enforcement, counter-terrorism, immigration, legislature.
Commercial use: day care, gaming industry, residential security, e-commerce, voter verification, banking.
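To illustrate the second SVM approach above (a set of binary SVMs covering q classes), here is a hedged sketch using scikit-learn's one-vs-rest wrapper on reduced-space face features. The synthetic data, feature dimensionality and class sizes are illustrative assumptions, not part of the presentation.

import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Synthetic stand-in for projected face features: q = 10 known individuals,
# 4 training vectors each, 20 features per vector (illustrative shapes).
rng = np.random.default_rng(3)
centers = rng.standard_normal((10, 20))
X_train = np.vstack([c + 0.1 * rng.standard_normal((4, 20)) for c in centers])
y_train = np.repeat(np.arange(10), 4)

# One binary SVM per identity ("one vs rest"); the highest-scoring SVM wins.
clf = OneVsRestClassifier(LinearSVC()).fit(X_train, y_train)

probe = centers[3] + 0.1 * rng.standard_normal(20)
print("predicted identity:", clf.predict(probe.reshape(1, -1))[0])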
Slide 42 - Face Recognition Shivankush Aras ArunKumar Subramanian Zhi Zhang
State of the art
Three protocols for system evaluation are FERET, XM2VTS and FRVT. Commercial applications of FRT include face-verification-based ATMs and access control; law enforcement applications include video surveillance. Both global (based on the KL expansion) and local (domain knowledge: face shape, eyes, nose, etc.) face descriptors are useful.

Open Research Problems
There are no general solutions for variations in face images, such as the illumination and pose problems. The problem of aging also remains open.