YOLOv3 for car detection and object localization. Joseph Redmon, Santosh Divvala, Ross Girshick, & Ali Farhadi (2015). You Only Look Once: Unified, Real-Time Object Detection. #autonomous_driving
ResNet-50 for hand sign recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, & Jian Sun (2015). Deep Residual Learning for Image Recognition; Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2014). Going Deeper with Convolutions.
Transfer learning and fine-tuning pretrained models for car recognition
Inception network and FaceNet for face recognition: reimplementation of Florian Schroff, Dmitry Kalenichenko, & James Philbin (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Neural style transfer. Leon A. Gatys, Alexander S. Ecker, & Matthias Bethge (2015). A Neural Algorithm of Artistic Style. #generative
Classifying volcanoes on Venus, trained on a Venus volcanoes dataset (PyTorch)
Generating fruit images using variational autoencoders, #generative
Make-A-Video (partially implemented, in progress). Singer, U., Polyak, A., Hayes, T., Yin, X., An, J., Zhang, S., Hu, Q., Yang, H., Ashual, O., Gafni, O., Parikh, D., Gupta, S., & Taigman, Y. (2022). Make-A-Video: Text-to-Video Generation without Text-Video Data. #generative
Natural Language Processing
Neural machine translation using Attention mechanisms
Neural machine translation (French-to-English) using the original Transformer model (implemented with the TransformerX library).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin,
I. (2017). Attention Is All You Need.
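As an illustration of the mechanism at the core of the Transformer work above, here is a minimal NumPy sketch of scaled dot-product attention (toy shapes and random data, not the project code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from Vaswani et al. (2017):
    softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))   # 3 query vectors, d_k = 8
K = rng.normal(size=(5, 8))   # 5 key vectors
V = rng.normal(size=(5, 8))   # 5 value vectors
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over the keys, so every output row is a convex combination of the value vectors.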
Fine-tuning TensorFlow Hub pretrained models for text classification and visualizing metrics with TensorBoard
Character-level text generation using RNNs, #generative
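A character-level generator of this kind can be sketched with a single vanilla-RNN cell in NumPy; the weights below are random stand-ins (training would learn them via backpropagation through time):

```python
import numpy as np

rng = np.random.default_rng(42)
vocab = list("abcdefgh")              # toy character vocabulary
V, H = len(vocab), 16                 # vocab size, hidden size

# Randomly initialised weights standing in for trained parameters
Wxh = rng.normal(scale=0.1, size=(H, V))
Whh = rng.normal(scale=0.1, size=(H, H))
Why = rng.normal(scale=0.1, size=(V, H))

def rnn_step(x_idx, h):
    """One vanilla-RNN step: update the hidden state and output a
    probability distribution over the next character."""
    x = np.zeros(V); x[x_idx] = 1.0        # one-hot input character
    h = np.tanh(Wxh @ x + Whh @ h)         # new hidden state
    logits = Why @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                           # softmax over the vocabulary
    return h, p

def sample(seed_idx, n):
    """Generate n characters by repeatedly sampling the next one."""
    h, idx, out = np.zeros(H), seed_idx, []
    for _ in range(n):
        h, p = rnn_step(idx, h)
        idx = rng.choice(V, p=p)
        out.append(vocab[idx])
    return "".join(out)

text = sample(0, 20)
```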
Word-level text generation using LSTM to generate poems in the style of Shakespeare, #generative
Trained sequence models using various embedding layers and word vector representations, e.g. Word2Vec and GloVe word vectors
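A common way such word vectors feed a downstream model is by averaging them into a fixed-length sentence representation; a minimal sketch with toy 4-dimensional vectors standing in for GloVe embeddings (real vectors would be loaded from the published glove.6B files):

```python
import numpy as np

# Toy vectors standing in for pretrained GloVe embeddings
embeddings = {
    "good":  np.array([0.9, 0.1, 0.0, 0.2]),
    "great": np.array([0.8, 0.2, 0.1, 0.1]),
    "movie": np.array([0.0, 0.7, 0.5, 0.0]),
}

def sentence_vector(tokens, embeddings, dim=4):
    """Average the vectors of known tokens into one fixed-length
    representation; out-of-vocabulary tokens are skipped."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

v = sentence_vector(["good", "movie", "unknown"], embeddings)
```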
Sentiment classifier using an LSTM and GloVe-6B-50d word vectors to suggest the most relevant emoji for an input text, trained on the EMOJISET dataset
Trained sequence models using attention mechanisms
Tweet emotion recognition
Speech processing
Improvising jazz solos using an LSTM model trained on a corpus of jazz music, #generative
Deep learning for trigger word detection
Deep learning frameworks
TensorFlow: main framework used across projects, including the Python libraries and the CV and NLP projects above.
Keras: used alongside TensorFlow.
PyTorch: brief experience implementing a few models.
Programming Languages
Python: Proficient in developing Python-based solutions for machine learning and deep learning tasks,
including data pre-processing, feature engineering, and model development and evaluation.
C++: Basic understanding of C++14, with experience implementing computer vision algorithms using the OpenCV library. Also developed various data-processing and analysis algorithms in C++98 for university coursework, including sorting, searching, and graph algorithms.
Libraries
NumPy and Pandas: Experience using NumPy and Pandas for data manipulation, cleaning, and transformation.
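A small sketch of the kind of cleaning and transformation this involves, on an illustrative made-up dataset (the column names and values are hypothetical):

```python
import numpy as np
import pandas as pd

# Illustrative dataset with a missing value and inconsistent casing
df = pd.DataFrame({
    "city": ["Erbil", "erbil", "Tehran", "Tehran"],
    "temp_c": [24.0, np.nan, 31.0, 29.0],
})

df["city"] = df["city"].str.title()                      # normalise labels
df["temp_c"] = df["temp_c"].fillna(df["temp_c"].mean())  # impute missing value
summary = df.groupby("city")["temp_c"].mean()            # aggregate per city
```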
Scikit-learn: Experience with Scikit-learn for implementing supervised and unsupervised machine
learning algorithms for classification, regression, and clustering tasks. Experience mainly gained
through university coursework and a few standalone projects.
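A representative coursework-style example of the supervised side, fitting a classifier on the classic iris dataset (a generic sketch, not a specific project):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Supervised classification on the iris dataset with a held-out test split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```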
Matplotlib: Familiar with Matplotlib for data visualization and plotting.
Tools
NVIDIA TensorRT: Basic understanding of using TensorRT to optimize deep learning models for inference on
NVIDIA GPUs.
CUDA: Basic understanding of using CUDA for parallel computing and accelerating deep learning
algorithms.
Note: Experience gained through different courses and a few personal projects.
Software engineering
Developed multiple open-source Python libraries on GitHub and deployed them to PyPI, including TransformerX, Emgraph, and Bigraph; these libraries are actively downloaded and used.
Designed and implemented the architecture for the libraries using best practices such as object-oriented
programming, modular design, and version control with Git.
Collaborated with other developers on GitHub to contribute to open-source projects and perform code
reviews for pull requests.
Developed a Telegram music search engine named TASE using Python, integrating various APIs and technologies
such as Elasticsearch, Pyrogram, and ArangoDB. Implemented a scalable and fault-tolerant architecture
using RabbitMQ and Celery.
Utilized agile development methodologies such as Kanban to manage project tasks and ensure timely delivery
of features.
Developed documentation and test cases for the libraries and the search engine, ensuring high code quality
and maintainability.
Developed high-performance web applications using asynchronous and multiprocessing programming techniques in Python, leveraging libraries such as `multiprocessing`, `threading`, `asyncio`, and `concurrent.futures`.
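A minimal sketch of the combination of `asyncio` and `concurrent.futures` this refers to: dispatching blocking calls to a thread pool so an event loop can await them concurrently (the `blocking_io` function is a hypothetical stand-in for real I/O):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io(n):
    """Stand-in for a blocking call, e.g. disk or network I/O."""
    time.sleep(0.1)
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Run the blocking calls concurrently in worker threads
        tasks = [loop.run_in_executor(pool, blocking_io, n) for n in range(4)]
        return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

`asyncio.gather` preserves submission order, so the results line up with the inputs even though the calls overlap in time.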
Technical Writing and AI Research Blog
Platforms: Medium and Towards Data Science
Articles: Authored and published articles on attention mechanisms and Transformers.
Notes: Conducted research and synthesized complex technical concepts into clear and accessible content for a broad audience.
Languages and Interests
Languages: Fluent in English, Kurdish, and Persian.
Interests: I enjoy staying up to date on the latest developments in artificial intelligence and regularly read notable papers in the field. In my free time, I enjoy strolling around the city or hiking.