Gender Prediction Based on Eye Images Using Convolutional Neural Network, Inception, and MobileNet Methods
DOI: https://doi.org/10.51967/tanesa.v23i1.1272
Keywords: Transfer Learning, CNN, InceptionV3, MobileNet
Abstract
Human gender is considered a key demographic attribute because of its many uses in practical domains. Classifying human gender in unconstrained environments is a difficult task due to the large variation across image scenarios, and with the abundance of internet images, classification accuracy suffers under traditional machine learning methods. This study aims to make the gender classification process more effective by applying the concept of transfer learning. Experimental results show that CNN, InceptionV3, and MobileNet models using transfer learning with three hidden layers, each consisting of a convolutional layer, ReLU activation, and max-pooling, can classify male and female eye images at a good level of accuracy. This accuracy is also due to the performance optimization incorporated during training and testing of the data using the Adam optimizer. Testing and evaluation produced the lowest loss for the MobileNet model, 0.149, as well as the highest accuracy, 0.9390. From these experimental results, it can be concluded that the MobileNet model achieves the highest accuracy in both training and testing.
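As an illustration of the baseline architecture described in the abstract, the following is a minimal Keras sketch of a CNN with three hidden layers, each consisting of a convolution, ReLU activation, and max-pooling, compiled with the Adam optimizer for binary (male/female) classification. The input size (64x64 RGB eye crops) and filter counts are illustrative assumptions, not values reported in the paper.

# Minimal sketch of the three-hidden-layer CNN described in the abstract.
# Input size and filter counts are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # assumed eye-image size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),     # male vs. female
])

model.compile(optimizer="adam",                # Adam optimizer, as in the abstract
              loss="binary_crossentropy",
              metrics=["accuracy"])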
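For the transfer-learning models, the sketch below shows one standard way to reuse an ImageNet-pretrained MobileNet (the best-performing model in the abstract) as a frozen feature extractor with a new binary classification head; InceptionV3 can be swapped in the same way. The head layout, learning rate, and input size are assumptions rather than the paper's reported configuration.

# Hedged sketch of transfer learning with a frozen MobileNet base.
# Head layers and learning rate are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                         # keep pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # male vs. female
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])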
License
The copyright of this article is transferred to Buletin Poltanesa and Politeknik Pertanian Negeri Samarinda when the article is accepted for publication. The authors transfer any and all rights in and to the paper, including but not limited to all copyrights, to Buletin Poltanesa. The author represents and warrants that the paper is original and that he/she is its author, except for material that is clearly identified as to its original source, with notice of the permission of the copyright owner where necessary.
Copyright permission is obtained for material published elsewhere that requires such permission for reproduction. Furthermore, I/we hereby transfer the unlimited publication rights of the above paper to Poltanesa. The copyright transfer covers the exclusive right to reproduce and distribute the article, including reprints, translations, photographic reproductions, microform, electronic form (offline, online), or any other similar reproductions.
The author signs for and accepts responsibility for releasing this material on behalf of any and all co-authors. This agreement is to be signed by at least one of the authors, who has obtained the consent of the co-author(s) where applicable. After submission of this signed agreement, changes of authorship or in the order of the authors listed will not be accepted.