
Attention is Everything You Need: Case on Face Mask Classification

Pratama, Nanda; Harianto, Dody; Filbert, Stefan; Warnars, Harco Leslie Hendric Spits; Muyeba, Maybin K.

Abstract

Automated face mask classification has attracted attention recently following COVID-19 mask-wearing regulations. The current state of the art for this task relies on CNN-based methods such as ResNet. However, attention-based models such as Transformers have emerged as an alternative to the status quo. We explored Transformer-based models on the face mask classification task using three architectures: Vision Transformer (ViT), Swin Transformer, and MobileViT. The models achieved top-1 accuracy scores of 0.9996, 0.9983, and 0.9969, respectively. We concluded that Transformer-based models merit further exploration, and we recommended that the research community and industry investigate their integration with CCTV systems.
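The abstract reports each model's performance as a top-1 accuracy score. As a minimal illustration of that metric (the predictions and labels below are hypothetical, not from the paper's dataset), top-1 accuracy is simply the fraction of samples whose highest-ranked predicted class matches the ground-truth label:

```python
# Minimal sketch of the top-1 accuracy metric used in the abstract.
# The example predictions and labels are hypothetical, for illustration only.

def top1_accuracy(predictions, labels):
    """Fraction of samples whose top-ranked predicted class equals the true label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# Hypothetical binary labels: 1 = wearing a mask, 0 = not wearing a mask
preds = [1, 1, 0, 1, 0, 0, 1, 1]
truth = [1, 1, 0, 1, 0, 1, 1, 1]
print(top1_accuracy(preds, truth))  # 7 of 8 correct -> 0.875
```

For a multi-class classifier that outputs a score per class, the "top-1" prediction is the argmax over class scores before applying this comparison.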

Journal Article Type Article
Acceptance Date Jul 6, 2024
Online Publication Date Nov 25, 2023
Publication Date Nov 25, 2023
Deposit Date Oct 4, 2024
Publicly Available Date Oct 4, 2024
Journal Procedia Computer Science
Print ISSN 1877-0509
Publisher Elsevier
Peer Reviewed Peer Reviewed
Volume 227
Pages 372-380
DOI https://doi.org/10.1016/j.procs.2023.10.536
Keywords Face Mask, Classification, Convolutional Neural Network, Attention, Transformer, Deep Learning
